
Failed to grant permission to execute error and Zen Barcode for SSRS 2016 Reporting Services


Copy assemblies into the correct reporting services folder

When migrating to a new SQL Server, the barcodes caused an issue because the barcode assemblies were missing from the Reporting Services bin folder of the new version of Reporting Services that ships with SQL Server 2016.

The DLLs were still present in the install path of the old Reporting Services instance, so they were copied to the new active location as shown:

Copy the Zen.Barcode.* files into the bin directory of the currently active Reporting Services server

Permissions Error

After moving the required .dll into the correct location for this server, we then got a permission error.

Failed to load expression host assembly. Details: Could not load file or assembly 'Zen.Barcode.SSRS, Version=3.1.0.0, Culture=neutral, PublicKeyToken=b5ae55aa76d2d9de' or one of its dependencies. Failed to grant permission to execute. (Exception from HRESULT: 0x80131418) (rsErrorLoadingExprHostAssembly)

 

rssrvpolicy.config

Setting permissions to the assembly for reporting services is a matter of adding some grants to the policy file. Insert the following block after the last </CodeGroup> tag in that file.

"Program Files\Microsoft SQL Server\MSRS13.MSSQLSERVER\Reporting Services\ReportServer\rssrvpolicy.config"

<CodeGroup class="UnionCodeGroup" Name="BarcodeControl" version="1" PermissionSetName="FullTrust" Description="This code group grants Zen.Barcode.SSRS.dll FullTrust permission.">
<IMembershipCondition class="UrlMembershipCondition" version="1"
Url="C:\Program Files\Microsoft SQL Server\MSRS13.MSSQLSERVER\Reporting Services\ReportServer\bin\Zen.Barcode.SSRS.dll"/>
</CodeGroup>
<CodeGroup class="UnionCodeGroup" Name="BarcodeControl2" version="1" PermissionSetName="FullTrust" Description="This code group grants Zen.Barcode.Core.dll FullTrust permission.">
<IMembershipCondition class="UrlMembershipCondition" version="1"
Url="C:\Program Files\Microsoft SQL Server\MSRS13.MSSQLSERVER\Reporting Services\ReportServer\bin\Zen.Barcode.Core.dll"/>
</CodeGroup>
<CodeGroup class="UnionCodeGroup" Name="BarcodeControl3" version="1" PermissionSetName="FullTrust" Description="This code group grants Zen.Barcode.Design.dll FullTrust permission.">
<IMembershipCondition class="UrlMembershipCondition" version="1"
Url="C:\Program Files\Microsoft SQL Server\MSRS13.MSSQLSERVER\Reporting Services\ReportServer\bin\Zen.Barcode.Design.dll"/>
</CodeGroup>
<CodeGroup class="UnionCodeGroup" Name="BarcodeControl4" version="1" PermissionSetName="FullTrust" Description="This code group grants Zen.Barcode.SSRS.Design.dll FullTrust permission.">
<IMembershipCondition class="UrlMembershipCondition" version="1"
Url="C:\Program Files\Microsoft SQL Server\MSRS13.MSSQLSERVER\Reporting Services\ReportServer\bin\Zen.Barcode.SSRS.Design.dll"/>
</CodeGroup>

 

At this point we were able to start using the barcodes again in reporting services.

 

Reminder on setting up a barcode in a report

 

Create references in the Report>>Properties for the .dll files as shown

Zen.Barcode SSRS references

 

In the Code tab of the report properties, define a function that returns the image binary; this will be used in an expression for the image source in the report.


 

Drop an image into the report that will be the barcode and set the expression for the image MIME type as shown.

 

Example of the expression for the image; in this example, the value of another field is used as the source for generating the barcode.

 

If you found this helpful, then do comment; it helps motivate me to document more of these kinds of things.

 

Reference: https://www.barcoderesource.com/configurereportingservices.shtml


Yes you can - SQL Server Table Compression and Dynamics GP




Today I attended SQLBits in Manchester UK, where there was a session "Performance tuning SQL server on crappy hardware" by Monica Rathbun.

Monica has a fast and punchy presentation style that I enjoyed. Although I had already experienced or knew most of what was covered, it was still a good presentation. There was one takeaway I noted in my notebook to come back to later. Now back at the hotel, I'm having a look.

Row/Page compression - More Data in MEMORY

Monica was promoting the use of COMPRESSION – not just backup compression but ROW/PAGE database compression in the database engine itself.

By compressing the data in the database, the theory goes that you reduce I/O required to move the data around and allow much more relevant data to be held in SQL server’s caches and perhaps the underlying storage system’s caches too. Having more data in memory leads to a more performant system.

For some reason the existence of compression in the database engine was something that had slipped under my radar, perhaps because it used to be an Enterprise feature; now it's available to me in our SQL 2016 Standard Edition.


This is particularly interesting to Dynamics GP users as our database is full of padded CHAR data types, has very wide tables full of only partially used data (depending on modules used) or repeating data in the case of settings flags. Dynamics GP also has many tables full of decimal columns that are all zero, again due to configuration or options in how GP is set up or what modules are active. So from the outset it feels like Dynamics GP would benefit.

“Enabling compression only changes the physical storage format of the data that is associated with a data type but not its syntax or semantics”. This means the compression occurs inside the SQL engine but is transparent to the application interacting with SQL Server. There are two levels of compression of interest and available to us. ROW compression takes each data row in the table and:

  • It uses variable-length storage format for numeric types (for example integer, decimal, and float) and the types that are based on numeric (for example datetime and money).

  • It stores fixed-length character strings using a variable-length format, not storing the trailing blank characters.

So imagine how much room can be saved when you consider that the fields in Dynamics GP are fixed length! (ITEMNMBR, for instance, is a CHAR(31), so a typical short item number leaves most of those bytes as padding when uncompressed.)

What is more, there is another option: PAGE compression, which looks at repeating data within the pages stored on the filesystem and compresses that data. As this works over an entire page it is heavier on CPU resources, but it is great where there is a lot of repeated data down the rows of a table. Wait, repeating data down the rows of a column? That is exactly what we get lots of, due to status flags and little-used fields in the GP tables that vary little from top to bottom of the table.

Just look at something like Item Master table IV00101 or one of the pricing tables etc. There are distributions and settings that are the same, repeated for all items and are ripe for compression as this leads to repeated content in the pages.

Data repeated down from table IV00101 Item Master


So both the nature of the data in the tables and the use of compressible data types by Dynamics GP sure makes it look good for compression.


Compression does cause more CPU load, but unless you are pulling millions of rows it seems insignificant; see more here:

https://sqlperformance.com/2017/01/sql-performance/compression-effect-on-performance where testing showed it has little effect.


We can run EXEC sp_estimate_data_compression_savings 'dbo', 'IV00101', NULL, NULL, 'PAGE' ;

This will show us, by sampling a subset of the table (much as statistics sampling does), how much space should be saved by compressing the table, without having to actually do it. Let's try it with the Item Master in Dynamics GP.

SELECT COUNT(*) from IV00101


EXEC sp_estimate_data_compression_savings 'dbo', 'IV00101', NULL, NULL, 'PAGE' ;

Item Master compression test

So we can see the item master table goes from 81,944KB to 16,304KB that is only 20% of what it was!

Now trying it with IV00108, which has 6,107,169 rows (SELECT COUNT(*) FROM IV00108), we get 779,816KB going down to 139,440KB; that is only 19% of what it was before.

Compression testing with IV00108

So you can see how much saving can be achieved this way, imagine the reduced I/O from having 20% of what used to be read.

Even going down to ROW compression gives you only slightly less compression but less overhead too:
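The same system procedure will estimate ROW compression if you swap the last parameter; a sketch of the call behind the figures below:

EXEC sp_estimate_data_compression_savings 'dbo', 'IV00108', NULL, NULL, 'ROW' ;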


Only 46% of what it was with row level compression.


Downside

There are not many downsides. The first is technical: compressing the data takes CPU. Most SQL Servers are not CPU bound in terms of resources, so this should not be an issue; expect typically a 10-30% increase, so check your current CPU load. As compression is selected table by table, you could tackle only the most sizable tables in GP to get the majority of the benefit without having to apply compression to every table and index. When the data is written you take a hit compressing it, in order to reap the rewards later; so tables and indexes with great numbers of inserts per second may cause issues (though the loads would have to be big).

The article below has some good scripts to see what will work and what will not…

https://thomaslarock.com/2018/01/when-to-use-row-or-page-compression-in-sql-server/


Upside

Much smaller data means less I/O and more data in cache, and more data in memory makes for more efficient queries.


Summary

I am going to gradually add tables to compression and see what happens to CPU usage. The benefits should be substantial in terms of reads so it seems well worth pursuing.
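As a sketch of what enabling compression on a single table might look like (assuming the standard dbo schema; a rebuild takes locks, so run it in a quiet window):

-- Compress the table itself (heap or clustered index) with PAGE compression
ALTER TABLE dbo.IV00101 REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Rebuild the table's nonclustered indexes with the same setting
ALTER INDEX ALL ON dbo.IV00101 REBUILD WITH (DATA_COMPRESSION = PAGE);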


Support

This article would indicate it's supported for Dynamics GP. Although the tool referenced for choosing tables to compress is no longer available, it is possible to manually work with the database to turn on compression.

https://blogs.msdn.microsoft.com/nav/2011/07/21/sql-server-data-compression-and-microsoft-dynamics/


This is the reason going to conferences is so worthwhile; this is only one of many things I learnt or had reinforced today in the various sessions I attended.

Rebuild QTYONORDER of IV00102 Dynamics GP


I had to rebuild the quantity on purchase order in the inventory site quantities table today (On Order in Item Enquiry).

I wrote this SQL script, which works for our install with our settings, but you should verify your install does not have other modules or modifications that may make it break.

 

Comment out the UPDATEs and uncomment the SELECTs to see what is going to happen first; running SQL against your production environment is dangerous, so test, test, test first. This is on the HIGH side of risky as a script to run.

 

-- Rebuilds the qty on order in inventory from purchase order table
-- Don't know how manufacturing module would play with this
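-- NOTE: both parts below end with ROLLBACK TRANSACTION for safe testing;
-- change ROLLBACK to COMMIT TRANSACTION once the SELECTs show what you expect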
BEGIN TRANSACTION
-- Part 1 Correct the site/location qty on order from receipts and purchase orders
;
WITH CTE_ShippedQtys
AS (
SELECT PONUMBER
,POLNENUM
,SUM(QTYSHPPD * UMQTYINB) AS TotQTYSHIPPED
,SUM(QTYREPLACED * UMQTYINB) AS TotQTYREPLACED
FROM POP10500
GROUP BY PONUMBER
,POLNENUM
)
,CTE_OnOrderQtys
AS (
SELECT SUM(CASE WHEN POTYPE IN (1,3) THEN ((QTYORDER - QTYCANCE)*POP10110.UMQTYINB) - ISNULL(TotQTYSHIPPED, 0)+ISNULL(TotQTYREPLACED,0) ELSE 0 END) AS QTYORDER
,ITEMNMBR
,LOCNCODE
FROM POP10110
LEFT JOIN CTE_ShippedQtys ON POP10110.PONUMBER = CTE_ShippedQtys.PONUMBER
AND pop10110.ORD = CTE_ShippedQtys.POLNENUM
WHERE POLNESTA IN (
2
,3
)
GROUP BY ITEMNMBR
,LOCNCODE
)
UPDATE IV00102
SET QTYONORD = ISNULL(QTYORDER,0)
--SELECT QTYONORD,QTYORDER, *
FROM IV00102
LEFT JOIN CTE_OnOrderQtys ON IV00102.ITEMNMBR = CTE_OnOrderQtys.ITEMNMBR
AND IV00102.LOCNCODE = CTE_OnOrderQtys.LOCNCODE
AND IV00102.LOCNCODE !=''
WHERE
IV00102.LOCNCODE != '' AND
(QTYONORD != QTYORDER
OR (QTYONORD != 0 AND QTYORDER IS NULL))
ROLLBACK TRANSACTION

-- Sum up the location on-order qtys for the summary record
BEGIN TRANSACTION;
--Part 2 Correct the summary location
WITH CTE_SumLocations
AS (
SELECT SUM(QTYONORD) totqty
,ITEMNMBR
FROM IV00102
WHERE LOCNCODE != ''
GROUP BY ITEMNMBR
)
UPDATE IV00102
SET QTYONORD = totqty
--SELECT *
FROM IV00102
JOIN CTE_SumLocations ON CTE_SumLocations.ITEMNMBR = IV00102.ITEMNMBR
AND IV00102.LOCNCODE = ''
AND IV00102.RCRDTYPE = 1
AND QTYONORD != totqty

ROLLBACK TRANSACTION

Solution to Acrobat reader slow to attach email, Get a link to your file, How do you want to share your file


Our Dynamics GP install uses Reporting Services for generating sales documents, and these documents often need to be emailed to customers and suppliers. It was recently brought to my attention that pressing the share icon in Acrobat DC to send a PDF by email launches a new share window. This window often stalls, sometimes taking 30 seconds or longer to open, with a spinning icon. I assume this is an attempt by Adobe to draw people into its cloud document management solutions. For users processing transactions day in, day out, this mounts up over a day and is annoying and inefficient.

Although I've not checked, I think this is because it contacts the Adobe servers in the background and our corporate firewall blocks a lot of traffic. Admin users don't seem to suffer this problem, which may be evidence that this is the case.

Solution to slow email sending in Acrobat:

To enable the envelope icon on the tools menu, which jumps directly to attaching an email in Outlook and avoids this slow overlay share window, a registry change is required. The change is documented in the following Acrobat resource: How to use the email icon to send a PDF directly as email attachment

Basically, the following registry change is required for DC users. Having applied this change, Outlook opens in under a second with the PDF attached.

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Adobe\Acrobat Reader\DC\FeatureLockDown]
"bSendMailShareRedirection"=dword:00000000

Find orphan note records in a Dynamics GP company database


About the Dynamics GP Notes table

Notes are held within the SY03900 company table in Dynamics GP. Every note in that notes table has a NOTEINDX, which is used by other tables in GP to reference that note. This makes the notes system extensible: when new modules/addins are created for GP, those modules can simply piggyback on the existing notes table for their notes too, by inserting notes into the table and referring to them by index. However, this does have the disadvantage that, looking at a random note in the notes table, you have no idea what records may be referencing it.

Hence, if you delete records from tables that reference note records, you orphan the note record for each deleted record. The note record still exists, but the record that refers to it has been removed. An example of where this could happen: someone deletes a sales order line using SQL DELETE without also deleting the note record in SY03900 that is referenced by that sales order line record.

You see, the note table does not have a "source" field to identify the "owner" table, so there is nothing in a note record to indicate what table it originated from. Thus you must check the NOTEINDX fields in all the tables over the entire company database in order to find its owner.

Luckily for us, using dynamic SQL we can do this. We can query SQL Server for the definitions of all the tables in the database containing a field named NOTEINDX. We can then copy all the values that exist in each of those tables into one big list of NOTEINDX values. Finally, we can compare the notes table with that big list of valid values and find where a value in the notes table does not exist in any other table in the GP company database. These will (most likely) be orphan records. Note you must exclude the SY03900 table itself when preparing this consolidated list!

Dynamic SQL to find all references to note records

The following script does this. It looks for any table that has a field named NOTEINDX and inserts all the note index numbers from those tables and fields into a temp table, from where you may join back to the notes table to find the notes that no longer have a reference in the database (orphaned).

If you are then going to use the results to delete notes, beware: a third-party product may use notes but not name its reference column NOTEINDX. So use this with care, especially if you start removing notes based on it; check what the notes are first and gain confidence they are genuine orphans.

 

IF EXISTS (
SELECT *
FROM tempdb..sysobjects
WHERE NAME = '##NotesConsolidated'
)
DROP TABLE ##NotesConsolidated

SET NOCOUNT ON

CREATE TABLE ##NotesConsolidated (
NOTEINDX numeric(19,5)
,TableName VARCHAR(1000)
)
GO
DECLARE @SqlStatement VARCHAR(500)

DECLARE This_cursor CURSOR
FOR
SELECT 'INSERT INTO ##NotesConsolidated (NOTEINDX, TableName) select NOTEINDX, ''[' + SCHEMA_NAME(schema_id) + '].[' + t.NAME + ']'' from [' + SCHEMA_NAME(schema_id) + '].[' + t.NAME + ']'
FROM sys.tables AS t
INNER JOIN sys.columns c ON t.OBJECT_ID = c.OBJECT_ID
WHERE c.NAME = 'NOTEINDX' AND NOT (t.name='SY03900' AND SCHEMA_NAME(schema_id)='dbo');
OPEN This_cursor

FETCH NEXT
FROM This_cursor
INTO @SqlStatement

WHILE (@@fetch_status <> -1)
BEGIN
EXEC (@SqlStatement)

FETCH NEXT
FROM This_cursor
INTO @SqlStatement
END
CLOSE This_cursor
DEALLOCATE This_cursor

SELECT * FROM SY03900
where NOTEINDX not in (
SELECT NOTEINDX FROM ##NotesConsolidated )
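If you then decide to clean up confirmed orphans, it is worth keeping a copy of them first. A minimal sketch (the backup table name is my own invention):

-- Keep a copy of the suspected orphan notes before removing anything
SELECT *
INTO dbo.SY03900_OrphanedNotes
FROM SY03900
WHERE NOTEINDX NOT IN (SELECT NOTEINDX FROM ##NotesConsolidated)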

 

Results 

Examples of the four hundred and seven (yours will differ) tables containing references to notes in the notes table:

SQL indexed views are incompatible with Dynamics GP


Trying to use indexed views with Dynamics GP? – well I’ve been there and I have seen others attempt the same. The Dynamics GP database and application are not really compatible with SQL indexed views. To help out those searching around this subject I thought I should write up our experience.

What is an indexed view?

An indexed view is a SQL database view that has had an index applied to it. This sounds obvious, but the important thing to realise is that the index is materialised as an index on disk. Think of it as another table created on disk representing the data held in the view; this is why retrieving summary data from it is quick. This index "table" is maintained whenever the data it covers is altered. So it is easy to imagine that indexed views are great for creating summaries and filtered views of data that cross multiple tables, as they keep track of changes automatically and update the index "table", keeping the data in sync with the data in the various tables it derives from. As the data that represents the SQL view has been pushed to disk and persisted there, querying that summary data is lightweight and quick, because the work to process the data was already done when it was stored, pre-joined between tables and/or summarised, rather than the database engine having to do all that work on the fly each time the query is run.

However, there is a downside. When (any) index is created, you pay for the benefit of fast reads with slightly longer write times, and those longer writes lead to longer-lived locks on the data, which can lead to blocking. This may cause performance issues in some circumstances. The extra writing is because the data held in the index "table" that represents the view needs to be maintained (updated) whenever any of the data underpinning the view changes. Depending on the workload, table sizes, I/O performance etc., this may be significant work if large numbers of records are updated at once. Again, as with any SQL index, it takes space on disk too; for large, wide views over large tables this may be a consideration as well, bloating storage with the knock-on consequences that brings.

One of the most frustrating matters when working with indexed views is that there is a whole heap of constraints and restrictions on what is permitted in the query that forms them. For example, when using GROUP BY, the view must contain COUNT_BIG(*) as a column, and various database settings restrictions apply too (there are many, many more). This means that when designing even the simplest views, SQL compile errors warning that you cannot do "this and that" frequently cause annoyance. It is very obvious why this is the case when you think about it: the data is persisted to disk, so it needs to be unchanging to store, thus any SQL function used by the view has to be deterministic. Aha, no getdate() function! It is surprising how often this catches people out. I could go on and on about the restrictions and requirements of indexed views, but just go try making one and you'll discover the pain yourself, then go read the documentation to realise how much there is to it!
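As an illustration of the determinism rule (my own example, not from the original scripts), a view like this compiles, but the CREATE UNIQUE CLUSTERED INDEX step then fails because GETDATE() is non-deterministic:

CREATE VIEW dbo.TodaysInvoices
    WITH SCHEMABINDING
    AS
    SELECT SOPNUMBE, SOPTYPE
    FROM dbo.SOP30200
    WHERE DOCDATE = CAST(GETDATE() AS date); -- non-deterministic, so no index can be created on this view
GO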

Also shown below are the SQL SET options that are required by SQL indexed views…

SET options required by indexed views

Using indexed view with Dynamics GP

Professionally, we use indexed views a lot in our applications for the speed of access and the real-time integrity of summary data they offer, so sooner or later a GP admin or GP developer decides they would quite like to use an indexed view... then it all ends in tears. Let us see why…

WITH SCHEMABINDING

This is the first problem. To build an indexed view, the view must use SCHEMABINDING. This locks the database schema and the view together. Unfortunately this may then cause issues with application updates. When Dynamics GP is updated for a service pack or upgrade, sometimes tables or other database objects need to be dropped and recreated. This happens during the upgrade process; if an object to be dropped is schema bound, the upgrade script will not be allowed to do what it wants, causing the upgrade to fall over. Hence if indexed views have been bound to the GP tables, this is a real risk when a site comes to do an upgrade. Obviously, if the person performing the upgrade were aware, they could drop the view and recreate it after the upgrade, but in real life the knowledge is lost as employees leave or contractors move on, causing upgrade pain.
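Before an upgrade, a quick way to check a company database for schema-bound objects is a query like this (a sketch against the standard catalog views, not from the original post):

SELECT OBJECT_SCHEMA_NAME(object_id) AS SchemaName
      ,OBJECT_NAME(object_id) AS ObjectName
FROM sys.sql_modules
WHERE is_schema_bound = 1;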

The second problem is more terminal. Let's try it and see what happens. We create a view over the sales order header and lines to speed up counting the sales lines by country that are not voided in historical transactions. Create the view on a non-production database like this…

 

SET NUMERIC_ROUNDABORT OFF;
SET ANSI_PADDING, ANSI_WARNINGS, CONCAT_NULL_YIELDS_NULL, ARITHABORT,
    QUOTED_IDENTIFIER, ANSI_NULLS ON;
-- We must schema bind
IF OBJECT_ID ('SalesVoidedLinesSummary', 'view') IS NOT NULL
    DROP VIEW SalesVoidedLinesSummary ;
GO
CREATE VIEW SalesVoidedLinesSummary
    WITH SCHEMABINDING
    AS 
       SELECT  COUNT_BIG(*) Cnt,
               SOP30200.SOPTYPE,
               SOP30300.CCODE as Country
       FROM 
       dbo.SOP30200
       JOIN
       dbo.SOP30300 ON SOP30200.SOPTYPE=SOP30300.SOPTYPE AND SOP30200.SOPNUMBE=SOP30300.SOPNUMBE
       WHERE
       SOP30200.VOIDSTTS=0 --Not Voided
       AND SOP30200.SOPTYPE IN (1,2)
       GROUP BY  SOP30200.SOPTYPE, SOP30300.CCODE
GO

 

Then create the indexed view by adding the index. - REMEMBER I SAID NO PRODUCTION DATABASES WITH THIS LITTLE EXPERIMENT!


--We materialise the view by creating an index on it
CREATE UNIQUE CLUSTERED INDEX IX_SOP30200SOP30300VoidSummary
    ON SalesVoidedLinesSummary (SOPTYPE, Country);
GO

 

We can now query the view data like so..

SELECT  * FROM SalesVoidedLinesSummary where SOPTYPE=1 AND Country='PT'

and compare with querying the base tables directly, as we would have before the view existed…

SELECT  COUNT_BIG(*) Cnt,
               SOP30200.SOPTYPE,
               SOP30300.CCODE as Country
       FROM 
       dbo.SOP30200
       JOIN
       dbo.SOP30300 ON SOP30200.SOPTYPE=SOP30300.SOPTYPE AND SOP30200.SOPNUMBE=SOP30300.SOPNUMBE
       WHERE
       SOP30200.VOIDSTTS=0 --Not voided
       AND SOP30200.SOPTYPE = 1
       AND SOP30300.CCODE='PT'
       GROUP BY  SOP30200.SOPTYPE, SOP30300.CCODE

 

...and we find that going from not having the view to having it dramatically reduces the query time, as shown below. I'm not sure about the exact integrity of those figures, but this isn't a discussion of query optimisation; just accept that indexed views can solve performance problems.

    Total execution time without view: 17390
    Total execution time with view:      505

 

In production

So the developer or IT pro, after testing that out on the test company database, says "great, now let's create it on the production database, as that worked like a charm", creates the view, and then goes to log in to GP and create a sales order…

...to then get the dismay of the error that follows...

“An edit operation on table ‘SOP_Master_Number_SETP’ failed. A record was already locked.”


and

“INSERT failed because the following SET options have incorrect settings: ‘QUOTED_IDENTIFIER, CONCAT_NULL_YIELDS_NULL, ANSI_WARNINGS’. Verify that SET options are correct for use with indexed views and”…


… then the support calls start flowing in from the users!

So why can’t we have nice things?

Go back up in this post: as you saw in the graphic, there are certain SQL SET options required to make indexed views work. Sadly these are not compatible with the SET options required to make Dynamics GP work, shown below! Thus the application breaks, simple as that. If those phones are still ringing from your users right now, then simply drop the view and your users will be happy again.
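For the example view above, that is simply:

DROP VIEW SalesVoidedLinesSummary;

Dropping the view also drops the index that materialises it, and GP transactions save normally again.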

We did play around a bit but never found a solution to this issue; if you have a workaround, do let me know.

References

Mariano Gomez has a nice post explaining why Dexterity requires these settings to make GP run correctly:

Microsoft SQL Server DSN Configuration

Microsoft - Creating Indexed Views

Gavin wrote this which inspired me to document the issue before others fall into it...

Brief overview and comparison of how summary values are stored and calculated in Dynamics GP, Dynamics NAV and Dynamics 365 Business Central

 

Office Interop Access is denied fix for .NET automation of Microsoft Office


Problem, Could not load file or assembly … or one of its dependencies. Access is denied

This is the solution to a perplexing issue around Office automation that many people do not seem to be finding an answer for on forums.

The story: on a newly deployed set of PCs, a big issue arose where the Microsoft Office Interop for our .NET application would fail with "Access is denied" errors when launching our integrating application. The solutions attempted proved to be only temporary fixes, with the problem returning, often overnight.

Could not load file or assembly Microsoft.Office.Interop.Excel.dll or one of its dependencies. Access is denied.

Could not load file or assembly Microsoft.Office.Interop.Outlook.dll or one of its dependencies. Access is denied.

It turns out this is to do with permissions to the Interop assemblies that are put in the Global Assembly Cache (GAC) and the Centennial version of the Office App.

 

Solutions attempted

 

In an attempt to resolve the issue, the usual culprits were investigated, along with many more ideas that I’ve now forgotten about:

  • Changing the permissions within the COM Component Services control panel for the access, launch and activation permissions etc.
  • Taking control of the assembly cache permissions and granting access for directories and files including…
    "C:\Windows\assembly\GAC_MSIL\Microsoft.Office.Interop.Outlook\15.0.0.0__71e9bce111e9429c\Microsoft.Office.Interop.Outlook.dll"
    "C:\Windows\assembly\GAC_MSIL\Microsoft.Office.Interop.Excel\15.0.0.0__71e9bce111e9429c\Microsoft.Office.Interop.Excel.dll"
    "C:\Windows\assembly\GAC_MSIL\Microsoft.Office.Interop.Word\15.0.0.0__71e9bce111e9429c\Microsoft.Office.Interop.Word.dll"
  • Uninstalling Office, restarting, reinstalling Office
  • Uninstalling Office, restarting, installing the Office interop via the Office installer options
  • Removing and replacing the local Office interop assemblies for the integrating application
  • Embedding the interops in the application (difficult with NuGet packages though?)
  • Using cacls.exe to update the DACLs

Some of the above would work for the rest of that day, fixing the issue, yet the problem would return the next morning (the machines were left on overnight).

 

Root cause

Finally, a hint of the issue was found from digging deeper and deeper in the forums, revealing the root cause.

“Office in the Windows Store” was released, which is a different version of Office available through the Windows Store in Windows 10. This version of Office is often referred to as the Centennial version, as it utilises Project Centennial to port Office to a Store app. Project Centennial is a bridging solution intended to make it easier for software developers to migrate existing applications to the store-and-app architecture in Windows. This bridge was used to create the version of Office in the store, which is distinct from the standard Office you would install from other media sources.

This store version of Office has been included in the Windows system image used by some OEM manufacturers of PCs, so new users have it ready to go when they get the machine to the office or home. Being a store app, it gets refreshed by the Windows Store auto-updates… you can see where this is going?…

It would seem that this Centennial version of Office was installed in the disk image of these PCs. When full Office Professional was installed on a machine, all was fine until the (unused) Centennial version of Office decided to repair itself overnight during a store update, corrupting the permissions on the interop assemblies in the GAC and leaving them with only SYSTEM permissions, thus rendering them inaccessible to end users.

Many of the aforementioned fix attempts would overlay the correct permissions, but only until the Centennial version of Office undid the work again overnight.

What is more annoying is that early in diagnosing this issue I had a suspicion this was the cause, but found no installer under Start>>Apps and Features for Microsoft Office Desktop Apps or Office, other than the Pro copy, and nothing in Programs and Features either. Hence I’d assumed it couldn’t be the issue; wrongly, as it turns out.

This seems to be an issue that has been hitting people from early 2019 through to 2020, when this blog post was written, although the OEMs were asked not to include this version of Office in their images anymore, so it should fade away over time.

It is a bit concerning that few people in forums seem to have found solutions, other than rebuilding the machine, but pushing on…

 

Removing Centennial Office

The obvious solution was to remove the store version of Office. It was not required, and removing it should stop it overwriting the good install of Office. As noted before, there seemed to be no “uninstall” option anywhere in Windows to get rid of the application. There were a few ways to remove this Office listed on internet forums, but the only one that worked for me was to launch PowerShell as an admin user and run the following three commands, each in turn.

"(Get-AppxPackage -Name Microsoft.Office.Desktop).Dependencies | Remove-AppxPackage"
"Get-AppxPackage -Name Microsoft.Office.Desktop | Remove-AppxPackage"
"(Get-AppxPackage -Name Microsoft.Office.Desktop).Dependencies"

The final command should simply confirm removal by returning nothing.

Immediately it could be seen that the assemblies had been removed from the “C:\Windows\assembly\GAC_MSIL\*” directories, and after running the afflicted application the issue appeared to be resolved. The professional Office install was also repaired via the Office Pro installer, for good measure.

A very annoying, obscure problem finally resolved!

 

References:

How do I uninstall Office 365 Windows Store version?

how to uninstall office 365 preloaded in windows 10

Why ‘Office in the Windows Store’ isn’t really Microsoft Office

Office.dll access denied to VSTO add-in

Installing Remote Server Administration Tools for Windows 10


Note to self: run the following where the add-features option in Windows doesn't work.

DISM.exe /Online /add-capability /CapabilityName:Rsat.ActiveDirectory.DS-LDS.Tools~~~~0.0.1.0 /CapabilityName:Rsat.BitLocker.Recovery.Tools~~~~0.0.1.0 /CapabilityName:Rsat.CertificateServices.Tools~~~~0.0.1.0 /CapabilityName:Rsat.DHCP.Tools~~~~0.0.1.0 /CapabilityName:Rsat.Dns.Tools~~~~0.0.1.0 /CapabilityName:Rsat.FailoverCluster.Management.Tools~~~~0.0.1.0 /CapabilityName:Rsat.FileServices.Tools~~~~0.0.1.0 /CapabilityName:Rsat.GroupPolicy.Management.Tools~~~~0.0.1.0 /CapabilityName:Rsat.IPAM.Client.Tools~~~~0.0.1.0 /CapabilityName:Rsat.LLDP.Tools~~~~0.0.1.0 /CapabilityName:Rsat.NetworkController.Tools~~~~0.0.1.0 /CapabilityName:Rsat.NetworkLoadBalancing.Tools~~~~0.0.1.0 /CapabilityName:Rsat.RemoteAccess.Management.Tools~~~~0.0.1.0 /CapabilityName:Rsat.RemoteDesktop.Services.Tools~~~~0.0.1.0 /CapabilityName:Rsat.ServerManager.Tools~~~~0.0.1.0 /CapabilityName:Rsat.Shielded.VM.Tools~~~~0.0.1.0 /CapabilityName:Rsat.StorageReplica.Tools~~~~0.0.1.0 /CapabilityName:Rsat.VolumeActivation.Tools~~~~0.0.1.0 /CapabilityName:Rsat.WSUS.Tools~~~~0.0.1.0 /CapabilityName:Rsat.StorageMigrationService.Management.Tools~~~~0.0.1.0 /CapabilityName:Rsat.SystemInsights.Management.Tools~~~~0.0.1.0




Dynamics GP will not remove hold or why timestamp format in GP matters


If Dynamics GP will not let a SOP module transaction process hold be removed in the user interface, it may be due to the formatting of timestamps. Check that no integrating applications have created the hold incorrectly.

The issue is that GP is really fussy about the format of timestamps. One place timestamps can cause problems is the SOP Transaction Hold table; the following query illustrates what you need to know.

SELECT [PRCHLDID], [HOLDDATE], [TIME1]
FROM SOP10104
WHERE SOPNUMBE='YourDocumentNumber' AND SOPTYPE=2

This query gives us some data to look at, assuming some holds exist for the document supplied to the SQL query...

The first thing to notice is that GP often splits the date and time components of a point in time over two fields; in this case the point in time is represented by HOLDDATE and TIME1. Both fields are of the SQL data type "datetime", something that can catch out people who are new to GP databases, as it may not be totally obvious what is going on.

Let's look at the first row. It is for the hold type "CREDIT" and represents the point in time 3rd of May 2020 at eleven minutes past three and 31 seconds.
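For that row, the stored values look something like this (a reconstruction of the screenshot the original post showed here):

PRCHLDID    HOLDDATE                    TIME1
CREDIT      2020-05-03 00:00:00.000     1900-01-01 15:11:31.000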

  • It is interesting and important to note that the HOLDDATE field has a date, but the time elements of that datetime value are all zeros. To fill the unused time component, the time defaults to midnight; a default of zero (midnight) is used because the unused half of the datetime SQL data type may not be set to null (we need all those bytes).
  • Also note how the TIME1 field has a time component, but the date has been defaulted to the 1st of January 1900. By convention this date fills the date component of the time, as the unused half of a datetime data type may not be null.
  • Further, it is vital to note that the milliseconds of the time have been zeroed too.

Now, if you disobey these format rules, the GP user interface will do weird things. More often than not it just does nothing, which can be really perplexing to troubleshoot because, to the untrained eye, the records look fine and similar to other records right next to them. Something as simple as including milliseconds (not zeroing them) can prevent a user from removing a hold in the GP user interface!

These timestamps are used in this way in many tables, where the same rules apply. It is very important to provide correctly formatted time and date fields when integrating with or manually scripting against the GP database. This is a really common rookie mistake for developers working with GP; it's easy to get lazy or not notice that the formatting is wrong. It can also change behaviour and totals, as time span calculations may not produce the same outcome if time components are left in date fields. Imagine the impact when totalling values over time spans, urgh.
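If an integration has written a badly formatted hold, a sketch like the following normalises the two fields into the shape GP expects (my own example; take a backup first and narrow the WHERE clause to the affected document):

UPDATE SOP10104
SET HOLDDATE = CAST(CAST(HOLDDATE AS date) AS datetime) -- keep the date, zero the time back to midnight
,TIME1 = DATEADD(second, DATEDIFF(second, CAST(TIME1 AS date), TIME1), '19000101') -- 1900-01-01 date part, milliseconds zeroed
WHERE SOPNUMBE = 'YourDocumentNumber' AND SOPTYPE = 2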

So, to save some head scratching: to make a GP-compatible timestamp for the current time field in .NET, use the following code snippet:

DateTime now = new DateTime(1900, 1, 1).Add(DateTime.Now.TimeOfDay); // 1900-01-01 plus the current time of day
DateTime forFieldUse = now - new TimeSpan(0, 0, 0, 0, now.TimeOfDay.Milliseconds); // strip the milliseconds

Don't be tempted to simplify this by accessing .Now twice in the same statement: the time may shift between the two calls, so the milliseconds would not zero correctly, as two different values would be subtracted, not resulting in zero. DateTime values are immutable, so you can't simply zero the milliseconds component either.

To get the date-only value for use in GP date fields, you may simply use the built-in .NET property Today, as shown in this snippet:

DateTime.Today;

 

Hopefully this will help get you back up and running after Googling the problem you've just encountered! If so, do comment; it keeps me motivated to blog more.

Dynamics GP VAT rate change and how to deal with it


8th July 2020, and the UK government has announced a temporary reduction of the VAT rate from 20% to 5% for the hospitality sector (on food, accommodation and attractions), from 15th July until 12th January 2021. This gives you only one week to get advice from your ERP support on how to set up your system for the new VAT rate, and for some (depending on your services and product mix) this will involve dealing with different rates of tax on the same documents for the first time. Some people are SO going to be regretting not originally setting up invoice reports etc. to dynamically pull the tax rates at line level! Consultants are going to be super busy this week just helping people through the basics of reconfiguring their systems to work with the new temporary VAT rate. For many this is also going to involve some consultancy to put the rates back in January (depending on how self-sufficient the GP site is).

What should you think about?

You must check your supporting systems and documents, such as Excel, your GP reports and Reporting Services reports, for places where VAT rates have been “hard coded” (written in rather than looked up from the system). These will need addressing, preferably by doing the extra work to look them up dynamically at run time, or by replacing them with more generic wording if appropriate (say in terms and conditions of sale, where the VAT rate may have been spelled out in words). As time is short you may have to be less perfect: note where they all are, change them, and get ready to swap the old report for the new report/document on the day and back again in January, assuming the rate actually goes back then…

Next, you have three issues:

  • Ensuring new transactions from 15th July get the new 5% rate
  • Dealing with existing transactions, particularly SOP documents that will be fulfilled/processed after the 15th July, as they will retain the previous tax rate unless further work to recalculate them is performed
  • Putting the rate back again in January 2021

 

You have some avenues to explore:

  • Introduce a totally new Tax Schedule and Tax Detail IDs for the new rates. On the night of the change, update all the records to point at the new tax detail ids.
  • Just add new Tax Detail ID then take old one out of existing schedule.
  • Change the value on the existing Tax Detail ID (not recommended).
  • Turn on the effective tax dates feature (GP2010+) and configure the effective records for the date range in question. If no effective date record exists it falls back to the base tax rate, so this will work well.
  • Use the Regenerate Tax tool to update your existing transactions.
  • Use a macro to update each document (depending on your transaction volume, could be very painful)

Some of the above options provide a clearer picture than others when it comes to reporting your VAT. In particular, don’t just change the tax percentage of the existing Tax Detail; this is bound to give you issues, so consider that too.

Dynamics GP may, from version GP2010 (with an extra product), use date-effective tax. This allows date ranges to be stored against different tax rates for tax details; the appropriate tax rate is then used by a transaction based on its document/posting/tax date (as configured). This is an extra module that needs installing prior to GP 2015 R2, but after that it should be waiting for you to enable in company settings, if you haven't already.

There is also the useful “Regenerate Taxes” tool, which allows existing transactions to have their tax values updated once the detail ID has been updated.

Andy goes through some of the options for older versions of GP in his blog: All Change on 20% Tax Again January 2011; this should be read together with Changing SOP Tax Rates using the Support Debugging Tool.

 

The “What’s New in Dynamics GP 2015 R2” document explains all this nicely, as copied below:

Date effective tax

You can now enter the tax rates in advance for transactions, and calculate the taxes based on the rates specified for a date range. You can also recalculate the taxes for the saved transactions by modifying the specified tax rates based on the latest tax rates specified by the tax authorities.

You can mass modify the tax rates, and regenerate the taxes for the saved transaction batches.

Follow these steps to set up a date effective tax:

1. Open the Company Setup Options window.

Company Setup Dynamics GP Effective Tax Dates

2. Microsoft Dynamics GP >> Tools >> Set up >> Company >> Company >> Options

3. In the Company Setup Options window, mark the Enable Tax Date checkbox, to allow you to use the tax date option for calculating the tax rate for the transaction based on a tax date.

4. Mark the Use Date-effective Tax checkbox. Select the option to calculate the tax. The option that you select here is used for updating the taxes for saved transactions within the tax effective date range. Transactions that are corrected will be based on the date you select here. You can select the Document date, Posting date or the Tax date to calculate the tax.

NOTE: Select posting date if you want to calculate tax based on the posting date that you specify. Select Tax date if you want to calculate tax based on the tax date that you specify. Select Document date if you want to calculate tax based on the document date that you specify.

You must set up the date range for the specified Tax ID to calculate the tax amount or percentage for the transaction. Follow these steps to create the date range for the tax ID.

1. Open the Date effective Tax Rates window.

Date Effective Tax Rates window Dynamics GP

GP Tax Detail Window with Date Effective Tax Rates

2. Microsoft Dynamics GP >> Tools >> Set up >> Company >> Tax Details >> Select the tax ID in the Tax Detail ID field to set up the date range for >> Date effective Tax Rates.

3. Specify the date range for the specified tax amount or percentage. You can mass modify the tax percentage of multiple tax IDs.

You can choose to modify the tax percentage or amount for the tax detail or the tax type based on a date range, or for the tax without specifying any date range, or both. Follow these steps to mass modify the tax percentage.

1. Open the Mass Modify Taxes window. Microsoft Dynamics GP >> Tools >> Utilities >> Company >> Mass Modify Tax Percentage

2. Select the option to modify the tax percentage.

You can choose:

• Date effective Tax – To update the dates specified in the date range, and the percentage that you have specified in the Date effective Tax rates window.
• Tax details – To update the tax percentage that you have specified in the Tax Detail Maintenance window.
• Both – To update the tax percentage that you have specified in the Date effective Tax rates window, and in the Tax Detail Maintenance window.

3. Click Insert to view the list of the tax IDs listed for the modification. NOTE: Within the specified Tax IDs, if you do not want to modify any Tax ID, you can select the Tax ID record, and click Remove.

4. Click Modify to modify the listed Tax IDs based on your specifications. You can also regenerate taxes for the transaction batches with the updated tax percentage. You can regenerate taxes for all the saved transactions of all the modules (except GL) or the selected module, only if you mark the use date effective tax in the Company Setup Options window.

You can specify the modules and the batches for which you want the taxes to be regenerated, or regenerate the taxes for all the batches in all the modules (except GL) at one time.

Follow these steps to regenerate the taxes for the saved batch transactions.

1. Open the Regenerate Taxes window. Microsoft Dynamics GP >> Tools >> Company >> Utilities >> Regenerate Taxes

2. Select the module to regenerate the updated tax for the batches. You can choose All to update the tax for the saved batch transactions in all the modules (except GL). Or, you can choose the module to update the tax for the saved batch transactions in the selected module.

3. If you choose a particular module to update the batch transactions, specify the batches in the batch range fields.

4. Click Insert to view the list of the batches that will be updated with the modified tax percentage. NOTE: You can select a batch ID record and click Remove, to prevent updating the transactions with the modified tax percentage.

5. Click Process to recalculate the tax for the transaction in the module and batch specified.

 Important:

1. The tax calculation of a transaction will be overridden if there is a date effective tax rate that exist for any tax detail.

2. If the tax calculation routine does not find the rate for a particular date range, then the percentage in the tax detail maintenance will be taken.

3. For the receiving transaction entry, only shipment/Invoice will be considered for date effective tax calculation.

4. For the returns transaction entry, only return with credit and inventory with credit type of transactions will be considered for date effective tax calculation.

5. You can regenerate taxes for transactions when workflow is active for the Receivables Management Batch Approval, Payables Management Transaction and Payables Management Batch Approval.

 

References:

How to handle VAT rate changes in Dynamics GP (perhaps dated now)

Changing Tax Rates and Date Effective Tax Rates in Dynamics GP

What’s New in Microsoft Dynamics GP (see 2015 R2 section)

Changing SOP Tax Rates using the Support Debugging Tool

All Change on 20% Tax Again January 2011

SSMS Registered Servers–Key not valid for use in specified state


After exporting the registered servers from SQL Server Management Studio’s “Registered Servers” window and then importing the exported list on a new machine, you may encounter the following error if passwords were stored against the servers…

Registered Servers    Key not valid for use in specified state. (System.Security)


This is due to the encryption key used to encrypt the passwords being specific to the machine the server details were exported from; it does not exist on the new machine.

Even more annoyingly, this message will pop up once for each affected server, and those servers will be absent from the Registered Servers window in the new machine’s SSMS.


When I got around to fixing this, the original machine was long gone, so re-exporting the server list without passwords and re-importing it was no longer an option (this would also have worked).

To solve this problem I needed to remove the password from the affected registered servers; once the registered server list was properly populated again, I could simply edit the server properties to put the password back in for each affected server.


This registered server list is populated from an xml file found in the following location on your machine:

C:\Users\{username}\AppData\Roaming\Microsoft\Microsoft SQL Server\{sql version}\Tools\Shell\RegSrvr.xml


On opening this file and examining the XML, the snag was that some servers had been added since the import, and some of those were named the same as the ones causing the error. Scrolling through the XML in Notepad, it was not easy to tell which was which.

To help, I went back into the registered server list in SSMS, added a distinct comment to all the servers I could see, and exited again to update the XML file. On opening the file again, I searched for “Password=” and removed the section Password=”jdsalkfjdalksjd”; (where the random text is the encrypted password) from each XML node of type ConnectionStringWithEncryptedPassword. I removed the whole password parameter, as shown by the highlighting.

Note that the password property of the connection string was only removed if the description node shown below did not appear, or was present with a value not containing the unique marker text entered earlier in the description fields (in this case “**MyMarkerText**”).

<RegisteredServers:Description type="string">**MyMarkerText**</RegisteredServers:Description>

Save the file and launch SSMS. The error will not appear and the missing registered servers will be available again. Now right-click and edit the properties of each to manually restore the passwords.

Dynamics GP Web Client is not compatible with Docker due to the Session Central Service not starting


Back in 2017 I attended NAVTechDays and, after seeing the benefits the Dynamics NAV community were getting from Docker containers with NAV, I could see similar benefits for Dynamics GP; I was also seeing interest in some private conversations within the GP community. I decided to see if I could get a good selection of Dynamics GP versions into Docker. Word of that work spread and brought out more people interested in containers for GP.

I spent a lot of hours attempting to learn Docker and to containerise Dynamics GP. I could build Docker images for the GP server database and add in various GP components like eConnect and oData. Sadly, I was unable to get the Dynamics GP Web Client working; I really was just a cat's whisker away from success too. I was defeated by the domain account the Web Client runs under and its use of System.DirectoryServices.AccountManagement. The following is a record of what I found when investigating that issue.

Although I can create a Docker container for the database and test company, and even things like eConnect, there is a big problem. You see, there is no GUI with Docker containers; you can't just remote desktop into one. You really need the Web Client running to make a self-contained system that lets the host interact with the Dynamics GP application in the container. If you can't access it from a browser, you end up having to install the GP client on a non-Docker machine (the host) and connect to the running Docker container from there. Although this works, it's not really in the spirit of what I was trying to achieve: a fully self-contained container for GP, allowing multiple versions of GP to be easily spun up for testing and demo purposes.

Why I failed

After the Docker image has installed GP and the web client product features are installed, one of these, the Session Central Service, fails to start within the container. The Session Central Service is the component of the Dynamics GP web client installation that creates and tracks individual web client sessions as users connect and disconnect via web browsers. It is essential to getting the web client to work.

At this point let me jump right to the core issue: essentially, the problem is that the “Lanman” Windows service can’t be started in a Windows container. The Lanman service provides things like SMB file share services and is also referred to as the “Server service”, as that is how it appears, named in the services control panel list of services.

Containers are supposed to be stateless, isolated, single-app process groups, not mini virtual machines, so running Lanman would break that philosophy; hence it is not supported in Windows Docker containers, as it doesn't make sense there. Containers are meant to be created and torn down at the drop of a hat, and participating in Active Directory goes against the grain. This is why, along with a few other Windows features that are inappropriate for containers, the Lanman service just isn’t supported in a Windows container.

So why is this a problem?

You find that the Session Central Service checks the user it is running as, see (2) in the following screenshot, and it does this using a call to the following method:

Dynamics.GP.Web.Foundation.DirectoryServices.PrincipalManager.GetPrincipal

This method in turn calls .NET method:

System.DirectoryServices.AccountManagement.Principal.FindByIdentityWithType(PrincipalContext  context, Type principalType, IdentityType identityType, String identityValue)

It is the above call that fails; looking in the event log, we see it reporting, as indicated by (1) in the screenshot below, that:

Exception message:
                      The Server service is not started.

Possible root cause

Hey? - "The Server service is not started"– a confusing message as it means that the Lanman Service (aka server service) is not started. Yes this was super confusing error at the start of this investigation, not realising there was a "server service", so I assumed it was just a poor error message that wasn't telling me the service that had failed. So once I realised what it meant, I found out that, indeed if you go into the Docker container with Power Shell you can check that the Server Service is not started and even try to start it, however it will not start,  and this is due to the lack of support in windows containers, as previously stated. Lanman service can't be run on a windows container, thus preventing the use of .NETs System.DirectoryServices.AccountManagement. Microsoft in the past also confirm there are no plans for this to ever change, as it is fundamentally at odds with container philosophy to have this service running.

Without a small change to the Session Central Service code I was not going to get anywhere with this until…

Docker and Group Managed Service Account (gMSAs)

Hope then followed when I discovered the existence of Group Managed Service Accounts (gMSA), which I thought might provide a route around the lack of Lanman support. Using a gMSA, we can authenticate to Active Directory resources from a Windows container that is not part of the domain (remember, the container is not domain joined). A gMSA is a special account that can be used (over-simplifying) to pass a domain account along for the purposes of running a service. This could be perfect for the Session Central Service: instead of a named user, it can run using a gMSA account for authentication, which in turn would bypass Lanman. When you run a container with a gMSA, the container host retrieves the gMSA password from an Active Directory domain controller and gives it to the container instance. The container uses the gMSA credentials whenever its computer account (SYSTEM) needs to access network resources. In our case, when building the Docker container we can use PowerShell to change the Session Central service to use the SYSTEM account, having prepared the plumbing required for gMSA to authenticate against AD; SYSTEM becomes a kind of proxy for the AD account behind the gMSA. For this to work, certain prerequisites need to be met; there is quite a bit of learning there, more than I want to get into here. I built this up in Docker to test.

Sadly, on testing this thoroughly for a few days, it turned out there is another problem. A Docker container can access AD using the gMSA account, allowing FindByIdentity to work WHEN the PrincipalContext is constructed with ContextType.Domain and an accessible AD is specified. The group managed service account technique does not work when ContextType.Machine is used to create the PrincipalContext.

If we look at what is going on in the class in the screenshot, we find that the class is created with a constructor specifying machine as the context, stumping our attempts to use the gMSA to get around the lack of Lanman. This is due to no user name getting passed to the IsMachineContext function (see the last screenshot). I'm guessing that using the built-in NETWORK user means no user name is passed into this method, thus resulting in the highlighted code (1) being run in the following screenshot. This was upsetting to find, as I thought I had it cracked at this point.

Not all that learning was a loss, as it was interesting to learn about gMSAs; they are an important container and enterprise computing concept to understand.

The following screenshot shows the code causing our issue (1) in the Microsoft.Dynamics.GP.Web.Foundation.DirectoryServices.PrincipalManager.

 

GetPrincipal2

 

The user name being blank when the following IsMachineContext method is called means the branch in the logic in the above screenshot creates the PrincipalContext using the wrong context type for gMSA to work.

ismachinecontext

Finally


We can create containers with the database and test company, but sadly we end up having to install the GP client on the host to access the container, which was not really my objective.

It would be amazing if the code could be updated by Microsoft, but I don't see this happening. I need to refresh myself on the exact details of this issue and check again that nothing has changed in the years since I tried, as Docker is constantly being improved with new features.

Dynamics GP 2015 email–”Word experienced an error trying to open the file error sending email” when using Word Templates


When attempting to send a document that utilises Word Templates, even using PDF format, the following error may occur due to a tightening of security within the Microsoft Office product: "Word experienced an error trying to open the file".

Word experienced an error trying to open the file. Try these suggestions. Check the file permissions for the document or drive


Information about this issue can be found here: Word Templates will not Email/Print after Office Update

As I understand it, the XML generated by Dynamics GP is not compliant and causes Office versions with the patch to fail to load it. This is also true if you send a Word document generated in Dynamics GP to someone as a Word document: if they have patched Office, the document will not open. When Dynamics GP sends a PDF/Word email, it saves the document into a temp directory first and then sends it, turning it into a PDF as required. This is why even PDF documents are affected.

Solution for GP versions after 2016

The solution is to update with the latest GP patches if you are on a supported version of GP, 2016 and above (at time of writing).

Those updates found here:

Microsoft Dynamics GP U.S. Year-End Update 2020 RELEASED!!


Bodge solution for GP version 2015 – maybe previous versions too, I’ve not tested

If you are running unsupported GP 2015, then you are stuck without a solution, other than the correct one: you really should be upgrading to a supported version of GP (for these exact kinds of reasons). In the real world there are many reasons why this may not be possible quite yet, and you don’t want to postpone rolling out Office updates, as for security reasons you really shouldn’t hold back on updates.

There is a totally unsupported bodge you can use that may give you some time to upgrade whilst still benefiting from the latest Office updates. Please note this action should be considered carefully, as it has not been talked about by Microsoft as an option.


Fix Dynamics GP 2015 Word experienced an error issue

To fix this issue for Dynamics 2015, in a totally unsupported way – this is at your own risk!

Download GP 2016, or more importantly, download the latest GP 2016 cumulative update, and install the client only. Obviously you don’t want to upgrade your GP install yet and have no need for the database components, so perhaps do this on another machine for peace of mind. You must have the release that contains the bug fix; follow the link above. It must be the 2016 version of GP, this will not work if you do this with the 2018 version!


Download for GP2016 patch


In the Dynamics GP program files folder of the 2016 install, locate the following two files:

Microsoft.Dynamics.GP.BusinessIntelligence.Office.dll
Microsoft.Dynamics.GP.BusinessIntelligence.TemplateProcessing.dll

  • Ensure Dynamics GP is closed on the machine that you are working on
  • Also find the same files in the GP 2015 install folder and take a copy of them onto your desktop or somewhere safe, so you have a way to go back if you need to
  • Now replace only these two files in the Dynamics GP 2015 folder with those from the GP 2016 folder.

You may now launch GP 2015. You should find that the ability to send emails of Word Template based reports has been restored. Roll these two files out to your other workstations that use Word Template reports.

For reference, this was the version number on the replacement TemplateProcessing file from the attempt I made:

version 16.0.865.0 Microsoft.Dynamics.GP.BusinessIntelligence.TemplateProcessing.dll
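If you want to double-check which file versions you have ended up with after copying files around, a small .NET sketch like this will print them; the paths are illustrative and will need adjusting for your install folders:

using System;
using System.Diagnostics;

class DllVersionCheck
{
    static void Main()
    {
        // Illustrative paths, point these at your GP client install folder
        string[] files =
        {
            @"C:\Program Files (x86)\Microsoft Dynamics\GP2015\Microsoft.Dynamics.GP.BusinessIntelligence.Office.dll",
            @"C:\Program Files (x86)\Microsoft Dynamics\GP2015\Microsoft.Dynamics.GP.BusinessIntelligence.TemplateProcessing.dll"
        };

        foreach (var file in files)
        {
            // FileVersionInfo reads the version resource stamped into the dll
            var info = FileVersionInfo.GetVersionInfo(file);
            Console.WriteLine(info.FileVersion + "  " + file);
        }
    }
}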


Note: this is a totally unsupported method, use it at your own risk. However, with these just being reporting and Office library DLLs, the risk to GP should be low in my opinion. Although this is an unsupported change, if you are on such an old version of GP you are running unsupported already to some extent anyway! Also note that this will only work with version 2016 -> 2015. When I tried it with a more modern version of the DLL, GP failed to open.

Find orphan note records in a Dynamics GP company database


About the Dynamics GP Notes table

Notes are held within the SY03900 company table in Dynamics GP. Every note in that notes table has a NOTEINDX, which is used by other tables in GP to reference that note. This makes the notes system extensible: when new modules/add-ins are created for GP, those modules can simply piggy-back off the existing notes table for their notes too, by inserting notes into the table and referring to them by index. However, this does have the disadvantage that, looking at a random note in the notes table, you have no idea what records may be referencing it.

Hence, if you were to delete records from tables that reference note records, you orphan the note record for each deleted record. The note record can still exist, but the record that refers to it has been removed. An example of where this could happen is if someone were to delete a sales order line using SQL DELETE, without also deleting the note record in SY03900 that is referenced by that sales order line record.

You see, the note table does not have a "source" field to identify the "owner" table, so there is nothing in the note record to indicate from what table the note record originated. Thus you must check all the NOTEINDX fields in all the tables over the entire company database in order to find its owner.

Lucky for us, using dynamic SQL we can do this. We can query SQL server for the table definitions of all the tables in the database containing the field named NOTEINDX. We can then copy all the values that exist from each of those tables found into one big list of NOTEINDX values. Finally we can compare the notes table with that extracted big list of all valid values and find where we have a value in the notes table that does not exist in any of the tables in the GP company database. These will (most likely) be orphan records. Note you must exclude the SY03900 table itself when preparing this consolidated list!

Dynamic SQL to find all references to note records

The following script does this. It looks for any table that has a field named NOTEINDX and inserts all the note index numbers from those tables and fields into a temp table, from where you may join back to the notes table to find the notes that no longer have a reference in the database (orphaned).

If you are then going to use the results from this to delete notes, beware: you may have a third-party product that uses notes but does not name its reference to the note as NOTEINDX. So use this with care, especially if you start removing notes based on it; check what they are first and gain confidence they are genuine orphans.

 

-- Drop the consolidated temp table if it already exists
IF OBJECT_ID('tempdb..##NotesConsolidated') IS NOT NULL
    DROP TABLE ##NotesConsolidated

SET NOCOUNT ON

CREATE TABLE ##NotesConsolidated (
    NOTEINDX numeric(19,5)
    ,TableName VARCHAR(1000)
    )
GO
DECLARE @SqlStatement VARCHAR(MAX)

-- Build one INSERT statement per table containing a NOTEINDX column,
-- excluding the notes table (SY03900) itself
DECLARE This_cursor CURSOR
FOR
SELECT 'INSERT INTO ##NotesConsolidated (NOTEINDX, TableName) select NOTEINDX, ''[' + SCHEMA_NAME(schema_id) + '].[' + t.NAME + ']'' from [' + SCHEMA_NAME(schema_id) + '].[' + t.NAME + ']'
FROM sys.tables AS t
INNER JOIN sys.columns c ON t.OBJECT_ID = c.OBJECT_ID
WHERE c.NAME = 'NOTEINDX' AND NOT (t.name = 'SY03900' AND SCHEMA_NAME(schema_id) = 'dbo');

OPEN This_cursor

FETCH NEXT
FROM This_cursor
INTO @SqlStatement

WHILE (@@FETCH_STATUS = 0)
BEGIN
    EXEC (@SqlStatement)

    FETCH NEXT
    FROM This_cursor
    INTO @SqlStatement
END

CLOSE This_cursor
DEALLOCATE This_cursor

-- Orphans: note records with no referencing NOTEINDX anywhere else in the database
SELECT * FROM SY03900
WHERE NOTEINDX NOT IN (
    SELECT NOTEINDX FROM ##NotesConsolidated)

 

Results 

Examples of the four hundred and seven (yours will differ) tables containing references to notes in the notes table:

SQL indexed views are incompatible with Dynamics GP


Trying to use indexed views with Dynamics GP? – well I’ve been there and I have seen others attempt the same. The Dynamics GP database and application are not really compatible with SQL indexed views. To help out those searching around this subject I thought I should write up our experience.

What is an indexed view?

An indexed view is a SQL database view that has had an index applied to it. This sounds obvious, but the important thing to realise is that the index will be materialised as an index on disk. Think of it as another table created on disk representing the data held in the view; this is why it is quick to get the data if it is summary data. This index "table" is maintained whenever the data that it covers is altered. So it is easy to imagine that indexed views are great for creating summaries and filtered views of data that cross multiple tables, as they will keep track of changes automatically and update the index "table", thus keeping the data in sync with the data in the various tables it derives from. As the data that represents the SQL view has been pushed to disk and persisted there, querying that summary data is lightweight and quick, because the work has already been done to process the data when it was stored, pre-joined between tables and/or summarised, rather than the database engine having to do all that work on the fly each time the query is run.

However, there is a downside. When (any) index is created, you pay a price for the benefit of fast data recall when reading, by suffering slightly longer write times, and those longer writes lead to longer-lived locks on the data, which can lead to blocking. This may cause issues with performance in some circumstances. The extra writing is because the data held in the index "table" that represents the view needs to be maintained (updated) whenever any of the data underpinning that view changes. Depending on the workload, sizes of tables, IO performance etc., this may be significant work if large numbers of records are updated at once. Again, as with any SQL index, it takes space on disk too. For large, wide views over large tables this may be a consideration as well, bloating storage with the knock-on consequences that brings.

One of the most frustrating matters when working with indexed views is that there is a whole heap of constraints and restrictions around what is permitted in the query that forms them. For example, when using GROUP BY, the view must contain the function COUNT_BIG(*) as a column, and various database settings restrictions can apply too (there are many, many more). This means that when designing even the simplest views, SQL compile errors warning that it is not possible to do "this and that" frequently cause annoyance. It is very obvious why this is the case when you think about it: the data is persisted to disk, so it needs to be unchanging to store it, thus any SQL function used by the view has to be deterministic. Aha, no "getdate()" function! It is surprising how often this will catch people out. I could go on and on about the restrictions and requirements of indexed views, but just go try making one and you’ll discover the pain yourself, then go read the documentation to realise how much there is to it!

Another example, below, is the set of SQL SET options that are required by SQL indexed views…

IndexViewSettings

Using indexed view with Dynamics GP

Professionally we use indexed views a lot in our applications, for the speed of access and real-time integrity of summary data they offer, so sooner or later a GP admin or GP developer decides they would quite like to use an indexed view... then it all ends in tears. Let us see why…

WITH SCHEMABINDING

This is the first problem. To build an indexed view, the view must use SCHEMABINDING. This locks the database schema and the view together. Unfortunately this may then cause issues with application updates. When Dynamics GP is updated for a service pack or upgrade, sometimes the tables or other database objects may need to be dropped and recreated. This happens during the upgrade process; however, if the object that is to be dropped is schema bound, then the upgrade script will not be allowed to do what it wants, causing the upgrade to fall over. Hence, if indexed views have been bound to the GP tables, this is a real risk when a site comes to do an upgrade. Obviously if the person performing the upgrade was aware, they could drop the view and recreate it after the upgrade, but in real life the knowledge is lost as employees leave or contractors move on, causing upgrade pain.

The second problem is more terminal. Let's try it and see what happens if we create a view over the sales order header and lines, to speed up counting the number of sales lines by country that are not voided in historical transactions. Create the view on a non-production database like this…

 

SET NUMERIC_ROUNDABORT OFF;
SET ANSI_PADDING, ANSI_WARNINGS, CONCAT_NULL_YIELDS_NULL, ARITHABORT,
    QUOTED_IDENTIFIER, ANSI_NULLS ON;
-- We must schema bind
IF OBJECT_ID ('SalesVoidedLinesSummary', 'view') IS NOT NULL
    DROP VIEW SalesVoidedLinesSummary ;
GO
CREATE VIEW SalesVoidedLinesSummary
    WITH SCHEMABINDING
    AS 
       SELECT  COUNT_BIG(*) Cnt,
               SOP30200.SOPTYPE,
               SOP30300.CCODE as Country
       FROM 
       dbo.SOP30200
       JOIN
       dbo.SOP30300 ON SOP30200.SOPTYPE=SOP30300.SOPTYPE AND SOP30200.SOPNUMBE=SOP30300.SOPNUMBE
       WHERE
       SOP30200.VOIDSTTS=0 --Not Voided
       AND SOP30200.SOPTYPE IN (1,2)
       GROUP BY  SOP30200.SOPTYPE, SOP30300.CCODE
GO

 

Then create the indexed view by adding the index. - REMEMBER I SAID NO PRODUCTION DATABASES WITH THIS LITTLE EXPERIMENT!


--We materialise the view by creating an index on it
CREATE UNIQUE CLUSTERED INDEX IX_SOP30200SOP30300VoidSummary
    ON SalesVoidedLinesSummary (SOPTYPE, Country);
GO

 

We can now query the view data like so..

SELECT  * FROM SalesVoidedLinesSummary where SOPTYPE=1 AND Country='PT'

and directly before we created the view with this…

SELECT  COUNT_BIG(*) Cnt,
               SOP30200.SOPTYPE,
               SOP30300.CCODE as Country
       FROM 
       dbo.SOP30200
       JOIN
       dbo.SOP30300 ON SOP30200.SOPTYPE=SOP30300.SOPTYPE AND SOP30200.SOPNUMBE=SOP30300.SOPNUMBE
       WHERE
       SOP30200.VOIDSTTS=0 --Not voided
       AND SOP30200.SOPTYPE = 1
       AND SOP30300.CCODE='PT'
       GROUP BY  SOP30200.SOPTYPE, SOP30300.CCODE

 

...and we find that going from not having the view to having the view we have dramatically reduced the query time, as shown below. I’m not sure about the integrity of those figures, but this isn’t a discussion around query optimisation; just accept that indexed views solve performance problems.

    Total execution time without view: 17390
    Total execution time with view: 505

 

In production

So the developer or IT pro, after testing that out on the test company database, says “great, now let’s create it on the production database as that worked like a charm”, and then goes to log in to GP and create a sales order having created the view…

...to then get the dismay of the error that follows...

“An edit operation on table ‘SOP_Master_Number_SETP’ failed. A record was already locked.”

IndexViewGPError1

and

“INSERT failed because the following SET options have incorrect settings: ‘QUOTED_IDENTIFIER, CONCAT_NULL_YIELDS_NULL, ANSI_WARNINGS’. Verify that SET options are correct for use with indexed views and”…

GPIndexViewError2

… then starts the support calls flowing in from the users! 

So why can’t we have nice things?

Go back up this post: as you saw in the graphic, there are certain SQL SET options required to make indexed views work. Sadly these are not compatible with the SET options required to make Dynamics GP work, shown below; thus the application breaks, simple as that! If those phones are still ringing from your users right now, then just drop the view and your users will be happy again.

We did play around a bit but never found a solution to this issue, so if you have a workaround do let me know.

References

Mariano Gomez has a nice post about why this is required by Dexterity to make GP run correctly:

Microsoft SQL Server DSN Configuration

Microsoft - Creating Indexed Views

Gavin wrote this which inspired me to document the issue before others fall into it...

Brief overview and comparison of how summary values are stored and calculated in Dynamics GP, Dynamics NAV and Dynamics 365 Business Central

 


Office Interop Access is denied fix for .NET automation of Microsoft Office


Problem: Could not load file or assembly … or one of its dependencies. Access is denied

This is the solution to a perplexing issue around Office automation, for which many people do not seem to be finding a solution on forums.

The story: on a newly deployed set of PCs, a big issue arose where the Microsoft Office Interop for our .NET application would fail with errors of “Access is denied” when launching our integrating application. Solutions that were attempted proved only to be temporary fixes, with the problem returning again, often overnight.

Could not load file or assembly Microsoft.Office.Interop.Excel.dll or one of its dependencies. Access is denied.

Could not load file or assembly Microsoft.Office.Interop.Outlook.dll or one of its dependencies. Access is denied.

It turns out this is to do with permissions to the Interop assemblies that are put in the Global Assembly Cache (GAC) and the Centennial version of the Office App.

 

Solutions attempted

 

In an attempt to resolve the issue, the usual culprits were investigated, along with many more ideas that I’ve now forgotten about:

  • Changing the permissions within the COM Component Services control panel for the access, launch and activation permissions etc.
  • Taking control of the assembly cache permissions and granting access for directories and files including…
    "C:\Windows\assembly\GAC_MSIL\Microsoft.Office.Interop.Outlook\15.0.0.0__71e9bce111e9429c\Microsoft.Office.Interop.Outlook.dll"
    "C:\Windows\assembly\GAC_MSIL\Microsoft.Office.Interop.Excel\15.0.0.0__71e9bce111e9429c\Microsoft.Office.Interop.Excel.dll"
    "C:\Windows\assembly\GAC_MSIL\Microsoft.Office.Interop.Word\15.0.0.0__71e9bce111e9429c\Microsoft.Office.Interop.Word.dll"
  • Uninstalling, restarting, reinstalling Office
  • Uninstalling, restarting, installing the Office Interop via the Office installer options
  • Removing and replacing the local Office Interop assemblies for the integrating application
  • Embedding the Interops in the application (difficult with NuGet packages though?)
  • Using cacls.exe to update the DACL

Some of the above would work for the rest of that day, fixing the issue, yet the problem would return the next morning (machines left on overnight).

 

Root cause

Finally, a hint of the issue was found from digging deeper and deeper in the forums, revealing the root cause.

“Office in the Windows Store” was released, which is a different version of Office that is available through the Windows Store in Windows 10. This version of Office is often referred to as the Office Centennial version, as it utilises Project Centennial to allow Office to be ported to a Store app. Project Centennial is a bridging solution intended to make it easier for software developers to migrate existing applications to the store and app architecture in Windows. This bridge was used to create the version of Office in the store, which is distinct from the standard Office you would install from other media sources.

This store version of Office has been included in the Windows system image used by some OEM manufacturers of PCs, so new users have it ready to go on the machine when they get it to the office or home. Being a store app, it gets refreshed by the auto updates of the Windows Store… you can see where this is going…

It would seem that this Centennial version of Office was installed on the disk image of these PCs. When full Office Professional was installed on the machine, all was fine, until the (not in use) Centennial version of Office decided to repair itself overnight during the store update, corrupting the permissions on the Interop assemblies in the GAC, leaving them with only SYSTEM permissions, thus rendering them inaccessible to the end users.
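A quick way to see whether you are in this broken state is to dump the ACL on one of the GAC interop assemblies. Here is a minimal .NET Framework sketch; the path is the one listed earlier in this post, so adjust the version folder for your install:

using System;
using System.IO;
using System.Security.AccessControl;
using System.Security.Principal;

class GacAclDump
{
    static void Main()
    {
        // Path as used earlier in this post, adjust for your Office version
        string path = @"C:\Windows\assembly\GAC_MSIL\Microsoft.Office.Interop.Excel\15.0.0.0__71e9bce111e9429c\Microsoft.Office.Interop.Excel.dll";

        // A healthy file shows read/execute rights for Users; the broken
        // state described here leaves only SYSTEM with access
        FileSecurity acl = File.GetAccessControl(path);
        foreach (FileSystemAccessRule rule in acl.GetAccessRules(true, true, typeof(NTAccount)))
        {
            Console.WriteLine(rule.IdentityReference + ": " + rule.AccessControlType + " " + rule.FileSystemRights);
        }
    }
}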

Many of the aforementioned attempts to fix the issue would overlay the correct permissions, but only until the Centennial version of Office went and undid the work overnight.

What is more annoying is that early on in the diagnostics for this issue I had a suspicion this was the cause, but found no installer under Start >> Apps and Features for Microsoft Office Desktop Apps or Office, other than the Pro copy, and nothing in the programs list either. Hence I’d assumed it couldn’t be the issue, wrongly as it turns out.

This seems to be an issue that has been hitting people from early 2019 through to 2020, when this blog post was written, although the OEMs were asked not to include this version of Office in the images anymore, so it should fade away over time.

It is a bit concerning that few people in forums seem to have found solutions, other than rebuilding the machine, but pushing on…

 

Removing Centennial Office

It seemed the obvious solution was to remove the store version of Office. It was not required, and thus removing it should stop it overwriting the good install of Office. As noted before, there seemed to be no “uninstall” option anywhere in Windows to get rid of the application. There were a few ways to remove this Office listed on internet forums, but the only one that worked for me was to launch PowerShell as an admin user and then run the following three commands, each in turn.

"(Get-AppxPackage -Name Microsoft.Office.Desktop).Dependencies | Remove-AppxPackage"
"Get-AppxPackage -Name Microsoft.Office.Desktop | Remove-AppxPackage"
"(Get-AppxPackage -Name Microsoft.Office.Desktop).Dependencies"

The final command should just confirm removal, by not returning anything.

Immediately it could be seen that the assemblies had been removed from the “C:\Windows\assembly\GAC_MSIL\*” directories, and after running the afflicted application the issue seemed to be resolved. The Professional Office install was also repaired, via the Office Pro installer, for good measure.

A very annoying, obscure problem finally resolved!

 

References:

How do I uninstall Office 365 Windows Store version?

how to uninstall office 365 preloaded in windows 10

Why ‘Office in the Windows Store’ isn’t really Microsoft Office

Office.dll access denied to VSTO add-in

Installing Remote Server Administration Tools for Windows 10


Note to self: run the following where the add features option in Windows doesn't work.

DISM.exe /Online /add-capability /CapabilityName:Rsat.ActiveDirectory.DS-LDS.Tools~~~~0.0.1.0 /CapabilityName:Rsat.BitLocker.Recovery.Tools~~~~0.0.1.0 /CapabilityName:Rsat.CertificateServices.Tools~~~~0.0.1.0 /CapabilityName:Rsat.DHCP.Tools~~~~0.0.1.0 /CapabilityName:Rsat.Dns.Tools~~~~0.0.1.0 /CapabilityName:Rsat.FailoverCluster.Management.Tools~~~~0.0.1.0 /CapabilityName:Rsat.FileServices.Tools~~~~0.0.1.0 /CapabilityName:Rsat.GroupPolicy.Management.Tools~~~~0.0.1.0 /CapabilityName:Rsat.IPAM.Client.Tools~~~~0.0.1.0 /CapabilityName:Rsat.LLDP.Tools~~~~0.0.1.0 /CapabilityName:Rsat.NetworkController.Tools~~~~0.0.1.0 /CapabilityName:Rsat.NetworkLoadBalancing.Tools~~~~0.0.1.0 /CapabilityName:Rsat.RemoteAccess.Management.Tools~~~~0.0.1.0 /CapabilityName:Rsat.RemoteDesktop.Services.Tools~~~~0.0.1.0 /CapabilityName:Rsat.ServerManager.Tools~~~~0.0.1.0 /CapabilityName:Rsat.Shielded.VM.Tools~~~~0.0.1.0 /CapabilityName:Rsat.StorageReplica.Tools~~~~0.0.1.0 /CapabilityName:Rsat.VolumeActivation.Tools~~~~0.0.1.0 /CapabilityName:Rsat.WSUS.Tools~~~~0.0.1.0 /CapabilityName:Rsat.StorageMigrationService.Management.Tools~~~~0.0.1.0 /CapabilityName:Rsat.SystemInsights.Management.Tools~~~~0.0.1.0



Dynamics GP will not remove hold or why timestamp format in GP matters


If Dynamics GP will not let a SOP module transaction process hold be removed in the user interface, it may be due to the formatting of timestamps. Check that no integrating applications have created that hold incorrectly.

The issue is that GP is really fussy about the format of timestamps. One place timestamps can cause problems is the SOP transaction hold table; the following query illustrates what you need to know about this.

SELECT [PRCHLDID], [HOLDDATE], [TIME1]
FROM SOP10104
WHERE SOPNUMBE='YourDocumentNumber' AND SOPTYPE=2

This query gives us some data to look at, that is, if some holds exist for the document supplied to the SQL query...

The first thing to notice is that GP often splits the time and the date components of a point in time over two fields; in this case the point in time is represented by HOLDDATE and TIME1. Both fields are of the SQL data type datetime, something that can catch out people new to GP databases, as it may not be totally obvious what is going on.

Let's look at the first row. It is for the hold type "CREDIT" and represents the point in time 3rd of May 2020, at eleven minutes past three and 31 seconds.

  • It is interesting and important to note that the HOLDDATE field has a date, but the time elements of that datetime value are all zeros. To fill up the unused time component, the time has been defaulted to midnight, as the unused half of the datetime SQL data type may not be set to null (we need all those bytes).
  • Now also note how the TIME1 field value has a time component, but the date has been defaulted to the 1st of January 1900. This date is used by convention to fill the date component of the time, as the unused half of a datetime data type may not be null.
  • Further, it is vital to note that the milliseconds of the time have been zeroed too.

Now, if you disobey these format rules, the GP user interface will do weird things. More often than not it just doesn't do anything, which can be really perplexing to troubleshoot, because to the untrained eye the records look fine and similar to other records right next to them. Something as simple as including milliseconds (not zeroing them) can prevent a user from removing a hold in the GP user interface!

These timestamps are used in this way for many tables, where the same rules apply. It is very important to provide correctly formatted time and date fields when integrating with or manually scripting against the GP database. This is a really common rookie mistake for developers working with GP; it's easy to get lazy or not notice the formatting is wrong. It can also change behaviour and totals, as time span calculations may not produce the same outcome if time components are left in date fields. Imagine the impact when totalling values over time spans, urgh.

So, to save some head scratching, to make a GP-compatible timestamp for the current time in .NET, use the following code snippet:

DateTime now = new DateTime(1900, 1, 1).Add(DateTime.Now.TimeOfDay); // pin the date half to 1900-01-01
DateTime forFieldUse = now - new TimeSpan(0, 0, 0, 0, now.TimeOfDay.Milliseconds); // zero the milliseconds

Don't be tempted to simplify this by accessing .Now in two places in the same statement, as you may find the time has shifted enough between calls that the milliseconds do not zero correctly; two different values would be returned and subtracted, not resulting in zero! DateTime values are immutable, so you can't simply zero the milliseconds component either.

To get the date-only value for use in GP date fields, you may simply use the built-in .NET property DateTime.Today, as shown in this snippet:

DateTime.Today;
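Putting it together, here is a hedged sketch of writing both halves of a hold timestamp with ADO.NET; the connection string and the update itself are illustrative only, so treat it as a pattern rather than something to run against a live company database:

using System;
using System.Data;
using System.Data.SqlClient;

class GpTimestampDemo
{
    static void Main()
    {
        // Build the time half: date pinned to 1900-01-01, milliseconds zeroed
        DateTime now = new DateTime(1900, 1, 1).Add(DateTime.Now.TimeOfDay);
        DateTime time1 = now - new TimeSpan(0, 0, 0, 0, now.TimeOfDay.Milliseconds);

        using (var conn = new SqlConnection("Server=.;Database=TWO;Integrated Security=true")) // sample connection string
        using (var cmd = new SqlCommand(
            "UPDATE SOP10104 SET HOLDDATE = @holdDate, TIME1 = @time1 " +
            "WHERE SOPNUMBE = @sopNumber AND SOPTYPE = 2", conn)) // illustrative update
        {
            cmd.Parameters.Add("@holdDate", SqlDbType.DateTime).Value = DateTime.Today; // date half, time zeroed
            cmd.Parameters.Add("@time1", SqlDbType.DateTime).Value = time1;             // time half, date pinned
            cmd.Parameters.Add("@sopNumber", SqlDbType.VarChar, 21).Value = "YourDocumentNumber";

            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}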

 

Hopefully this will help you get back up and running after Googling the problem you've just encountered! If so, do comment; it keeps me motivated to blog more.

Dynamics GP VAT rate change and how to deal with it


8th July 2020, and the UK government has announced a temporary reduction of the VAT rate from 20% to 5% for the hospitality sector (on food, accommodation and attractions), from 15th July until 12th January 2021. This gives you only one week to get advice from your ERP support on how to set up your system for the new VAT rate, and for some (depending on your services and product mix) this will involve, for the first time, having to deal with different rates of tax on the same documents. Some people are SO going to be regretting not originally setting up invoice reports etc. to dynamically pull the tax rates at the line level! Consultants are going to be super busy this week just helping people through the basics of reconfiguring their system to work with the new temporary VAT rate. For many this is also going to involve some consultancy to put the rates back in January (depending on how self-sufficient the GP site is).

What should you think about?

You must check your supporting systems and documents, such as Excel, your GP reports and Reporting Services reports, for places where VAT rates have been “hard coded” (written in rather than looked up from the system). These will need addressing, preferably by doing the extra work to look them up dynamically at run time, or by replacing them with more generic wording if appropriate (say in terms and conditions of sale, where the VAT rate may have been specified in words). As time is short you may just have to be less perfect: note where they all are, go change them, and get ready to swap the old report for the new report/document on the day and back again in January, assuming the rate actually goes back then…

Next, you have three issues:

  • Ensure new transactions from 15th July get the new 5% rate
  • Dealing with existing transactions, particularly SOP documents that will be fulfilled/processed after the 15th July, as they will retain the previous tax rate unless further work to recalculate them is performed.
  • Putting the rate back again in January 2021

 

You have some avenues to explore:

  • Introduce a totally new Tax Schedule and Tax Detail IDs for the new rates. On the night of the change, update all the records to point at the new Tax Detail IDs.
  • Just add a new Tax Detail ID, then take the old one out of the existing schedule.
  • Change the value on the existing Tax Detail ID (not recommended).
  • Turn on the effective tax dates feature (version GP2010+) and configure the effective records for the date in question. If no effective date exists it will fall back to the base tax rate, so this will work well.
  • Use the Regenerate Taxes tool to update your existing transactions.
  • Use a macro to update each document (depending on your transaction volume, could be very painful).

Some of the above options provide a clearer picture than others when it comes to reporting your VAT. In particular, don’t just change the tax percentage of the existing Tax Detail; this is bound to give you issues, so consider that too.

Dynamics GP may, from version GP2010 (with an extra product), use date-effective tax. This allows date ranges to be stored against different tax rates for tax details, the appropriate tax rate then being used by a transaction based on its document/posting/tax date (as configured). This is an extra module that needs installing prior to GP 2015 R2, but after that it should be waiting for you to enable in company settings, if you haven't already.

There is also the useful “Regenerate Taxes” tool that allows existing transactions to have their tax values updated, once the Tax Detail ID has been updated.

Andy, in his blog, goes through some of the options with older versions of GP: All Change on 20% Tax Again January 2011; this should be read together with Changing SOP Tax Rates using the Support Debugging Tool.

 

The “What’s New in Dynamics GP 2015 R2” document explains all this nicely, as copied below:

Date effective tax

You can now enter the tax rates in advance for transactions, and calculate the taxes based on the rates specified for a date range. You can also recalculate the taxes for the saved transactions by modifying the specified tax rates based on the latest tax rates specified by the tax authorities.

You can mass modify the tax rates, and regenerate the taxes for the saved transaction batches.

Follow these steps to set up a date effective tax:

1. Open the Company Setup Options window.

Company Setup Dynamics GP Effective Tax Dates

2. Microsoft Dynamics GP >> Tools >> Set up >> Company >> Company >> Options

3. In the Company Setup Options window, mark the Enable Tax Date checkbox, to allow you to use the tax date option for calculating the tax rate for the transaction based on a tax date.

4. Mark the Use Date-effective Tax checkbox. Select the option to calculate the tax. The option that you select here is used for updating the taxes for saved transactions within the tax effective date range. Transactions that are corrected will be based on the date you select here. You can select the Document date, Posting date or the Tax date to calculate the tax.

NOTE: Select posting date if you want to calculate tax based on the posting date that you specify. Select Tax date if you want to calculate tax based on the tax date that you specify. Select Document date if you want to calculate tax based on the document date that you specify.

You must set up the date range for the specified Tax ID to calculate the tax amount or percentage for the transaction. Follow these steps to create the date range for the tax ID.

1. Open the Date effective Tax Rates window.

DAte EFfective Tax Rates Window Dynamics GP

GP Tax Detail Window with Date Effective Tax Rates

2. Microsoft Dynamics GP >> Tools >> Set up >> Company >> Tax Details >> Select the tax ID in the Tax Detail ID field to set up the date range for >> Date effective Tax Rates.

3. Specify the date range for the specified tax amount or percentage. You can mass modify the tax percentage of multiple tax IDs.

You can choose to modify the tax percentage or amount for the tax detail or the tax type based on a date range, or for the tax without specifying any date range, or both. Follow these steps to mass modify the tax percentage.

1. Open the Mass Modify Taxes window. Microsoft Dynamics GP >> Tools >> Utilities >> Company >> Mass Modify Tax Percentage

2. Select the option to modify the tax percentage.

You can choose:

• Date effective Tax – To update the dates specified in the date range, and the percentage that you have specified in the Date effective Tax rates window.
• Tax details – To update the tax percentage that you have specified in the Tax Detail Maintenance window.
• Both – To update the tax percentage that you have specified in the Date effective Tax rates window, and in the Tax Detail Maintenance window.

3. Click Insert to view the list of the tax IDs listed for the modification. NOTE: Within the specified Tax IDs, if you do not want to modify any Tax ID, you can select the Tax ID record, and click Remove.

4. Click Modify to modify the listed Tax IDs based on your specifications. You can also regenerate taxes for the transaction batches with the updated tax percentage. You can regenerate taxes for all the saved transactions of all the modules (except GL) or the selected module, only if you mark the use date effective tax in the Company Setup Options window.

You can specify the modules and the batches for which you want the taxes to be regenerated, or regenerate the taxes for all the batches in all the modules (except GL) at one time.

Follow these steps to regenerate the taxes for the saved batch transactions.

1. Open the Regenerate Taxes window. Microsoft Dynamics GP >> Tools >> Company >> Utilities >> Regenerate Taxes

2. Select the module to regenerate the updated tax for the batches. You can choose All to update the tax for the saved batch transactions in all the modules (except GL). Or, you can choose the module to update the tax for the saved batch transactions in the selected module.

3. If you choose a particular module to update the batch transactions, specify the batches in the batch range fields.

4. Click Insert to view the list of the batches that will be updated with the modified tax percentage. NOTE: You can select a batch ID record and click Remove, to prevent updating the transactions with the modified tax percentage.

5. Click Process to recalculate the tax for the transaction in the module and batch specified.

 Important:

1. The tax calculation of a transaction will be overridden if there is a date effective tax rate that exists for any tax detail.

2. If the tax calculation routine does not find the rate for a particular date range, then the percentage in the tax detail maintenance will be taken.

3. For the receiving transaction entry, only shipment/Invoice will be considered for date effective tax calculation.

4. For the returns transaction entry, only return with credit and inventory with credit type of transactions will be considered for date effective tax calculation.

5. You can regenerate taxes for transactions when workflow is active for the Receivables Management Batch Approval, Payables Management Transaction and Payables Management Batch Approval.

 

References:

How to handle VAT rate changes in Dynamics GP (perhaps dated now)

Changing Tax Rates and Date Effective Tax Rates in Dynamics GP

What’s New in Microsoft Dynamics GP (see 2015 R2 section)

Changing SOP Tax Rates using the Support Debugging Tool

All Change on 20% Tax Again January 2011

SSMS Registered Servers–Key not valid for use in specified state


After exporting the registered servers from SQL Server Management Studio’s Registered Servers window and then importing the exported list onto a new machine, you may encounter the following error if passwords were stored against the servers…

Registered Servers    Key not valid for use in specified state. (System.Security)
Registered Servers - Key Not Valid for use in specified state.


This is due to the encryption key used to encrypt the passwords being specific to the machine the server details were exported from, and it not existing on the new machine.

Even more annoyingly, this message will pop up once for each server with an issue. Those servers will be absent from the Registered Servers window in the new machine’s SSMS.


When I got around to fixing this, the original machine was long gone, so re-exporting the server list without passwords and re-importing it was no longer an option (that would also have worked).

To solve this problem I needed to remove the password from the affected registered servers; once the registered server list is properly populated again, I can simply edit the server properties to put the password back in for each affected server.


This registered server list is populated from an xml file found in the following location on your machine:

C:\Users\{username}\AppData\Roaming\Microsoft\Microsoft SQL Server\{sql version}\Tools\Shell\RegSrvr.xml


On opening this file and examining the XML, the snag was that some servers had been added since the server import. Some of those servers were named the same as the ones causing the error. Scrolling through the XML in Notepad, it was not easy to tell which was which.

To help, I went back into the registered server list in SSMS, added a distinct comment to all the servers I could see, and exited again to update the XML file. On opening the file again, I searched for “Password=” in the file and removed the section Password=”jdsalkfjdalksjd”; – where the random text is the password and the XML node was of type ConnectionStringWithEncryptedPassword. I removed the whole password parameter as shown by the highlighting.

Note that the password property of the connection string was only removed if the description node shown below did not appear, or was present but had a value not containing the unique text entered earlier in the description fields (in this case the unique text was “**MyMarkerText**”).

<RegisteredServers:Description type="string">**MyMarkerText**</RegisteredServers:Description>
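If you have a lot of servers, hand-editing gets tedious. As an alternative to the manual edit, here is a minimal .NET sketch of the same change, assuming the password appears in the connection string as Password="…"; as shown above; the path placeholders follow the earlier convention and must be adjusted for your machine, and you should take a backup copy of RegSrvr.xml first. Note it strips the password from every server in the file, so the marker-comment check described above still matters if there are servers whose passwords you want to keep.

using System;
using System.Linq;
using System.Text.RegularExpressions;
using System.Xml.Linq;

class StripRegSrvrPasswords
{
    static void Main()
    {
        // Adjust the user name and SQL version folder for your machine
        string path = @"C:\Users\{username}\AppData\Roaming\Microsoft\Microsoft SQL Server\{sql version}\Tools\Shell\RegSrvr.xml";

        var doc = XDocument.Load(path);

        // Match by local name so we don't need to know the exact XML namespace
        foreach (var el in doc.Descendants()
            .Where(e => e.Name.LocalName == "ConnectionStringWithEncryptedPassword"))
        {
            // Strip the Password="...": parameter, leaving the rest of the connection string
            el.Value = Regex.Replace(el.Value, "Password=\".*?\";", "", RegexOptions.IgnoreCase);
        }

        doc.Save(path); // only after backing the file up!
    }
}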

Save the file and launch SSMS. The error will not appear and the missing registered servers will be available again. You must now right click and edit the properties of each to manually restore the passwords.
