The project goal is to determine which records need updating.
Two SQL Server instances, each at compatibility level 150. One has a view that returns the CHECKSUM of several columns (uniqueidentifier, datetime, int, varchar, …). This is compared with a value stored in a table on the other server. When they differ, a change has happened. At least, that’s the theory.
In SSMS the two values always match, whereas in C# (2022), using Dapper or Entity Framework, the ref server always returns the correct value and the src view consistently returns the same wrong value.
(I don’t think this is a duplicate question; if it is, I’m sorry.)
If I compare the two checksums, they differ in the middle bytes, e.g.:
Src: 6DC8 D3BB
Ref: 6DCC 03BB
The differing nibbles in binary:
Src: 8 = 1000, D = 1101
Ref: C = 1100, 0 = 0000
Endian issue?
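For what it’s worth, simply reversing the byte order of the Src value does not produce the Ref value, so a plain byte-order swap doesn’t seem to explain it. A quick C# check using the values above:

    using System;
    using System.Buffers.Binary;

    class EndianCheck
    {
        static void Main()
        {
            uint src = 0x6DC8D3BB;                                  // Src checksum bits
            uint swapped = BinaryPrimitives.ReverseEndianness(src); // reverse the byte order
            Console.WriteLine($"{swapped:X8}");                     // BBD3C86D, not 6DCC03BB
        }
    }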
CHECKSUM returns an INT, but I notice some 64-bit widening going on:
Src: FFFF FFFF B4DD DA5B
Ref: FFFF FFFF B4C4 0A5B
Again, the differing nibbles in binary:
Src: D = 1101, D = 1101, D = 1101
Ref: C = 1100, 4 = 0100, 0 = 0000
(I used the Windows calculator to convert; I trust it about 80% but haven’t verified the values.)
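Side note on the FFFF FFFF prefix: that pattern is what plain sign extension looks like, i.e. a negative 32-bit INT widened to 64 bits somewhere along the way, rather than anything endian-related. A minimal C# sketch using one of the values above:

    using System;

    class SignExtension
    {
        static void Main()
        {
            int src = unchecked((int)0xB4DDDA5B); // negative as a 32-bit value
            long widened = src;                   // implicit sign extension to 64 bits
            Console.WriteLine($"{src:X8}");       // B4DDDA5B
            Console.WriteLine($"{widened:X16}");  // FFFFFFFFB4DDDA5B
        }
    }

So the leading FFs are probably just widening in the display, not extra data in the checksum itself.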
Siggemannen: I know CHECKSUM carries a risk of collisions and is considered less reliable than HASHBYTES, but that was not my decision.
Thom: In SSMS it’s a simple “select checkvalue from … where id = @id”, and that’s the same script I use in Dapper: var result = db.Query<checksummodel>(script, param); with public class checksummodel { public int checkvalue { get; set; } }
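For completeness, here is roughly what that call looks like as a runnable method; the connection string and view name (dbo.SrcView) are placeholders, not my real schema:

    using System;
    using System.Data.SqlClient;
    using Dapper;

    public class checksummodel
    {
        public int checkvalue { get; set; } // Dapper maps [checkvalue] case-insensitively
    }

    public static class ChecksumReader
    {
        // Runs the same script as in SSMS; dbo.SrcView is a placeholder name.
        public static int? ReadCheckValue(string connectionString, Guid id)
        {
            using var db = new SqlConnection(connectionString);
            var row = db.QueryFirstOrDefault<checksummodel>(
                "select checkvalue from dbo.SrcView where id = @id",
                new { id });
            return row?.checkvalue;
        }
    }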
The view uses CHECKSUM(field1 (int), field2 (uniqueidentifier), … fieldn) AS [checkvalue]. If you still want code, I will try to construct a generic demo that reproduces the issue; a rough sketch of the shape is below.
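In the meantime, a minimal sketch of what such a demo might look like (all object and column names are made up; only the CHECKSUM pattern matches my view):

    using System;
    using System.Data.SqlClient;
    using Dapper;

    public static class ChecksumDemo
    {
        private const string CreateTable = @"
            CREATE TABLE dbo.DemoSource
            (
                Id        uniqueidentifier NOT NULL PRIMARY KEY,
                IntField  int              NOT NULL,
                WhenField datetime2        NOT NULL,
                TextField varchar(50)      NOT NULL
            );";

        private const string CreateView = @"
            CREATE VIEW dbo.DemoSrcView AS
            SELECT Id,
                   CHECKSUM(IntField, Id, WhenField, TextField) AS [checkvalue]
            FROM dbo.DemoSource;";

        public static void Run(string connectionString)
        {
            using var db = new SqlConnection(connectionString);
            db.Execute(CreateTable);
            db.Execute(CreateView);

            var id = Guid.NewGuid();
            db.Execute(
                "insert dbo.DemoSource values (@id, 42, sysutcdatetime(), 'abc')",
                new { id });

            // Compare this value with the one SSMS returns for the same row.
            var check = db.QuerySingle<int>(
                "select checkvalue from dbo.DemoSrcView where Id = @id",
                new { id });
            Console.WriteLine($"{check:X8}");
        }
    }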
CharlieFace: I have considered adding a fixed collation, because it does indeed alter the result. That may be the key: perhaps SSMS performs some wizardry to accommodate this, which ends up confusing me. I avoided that change because it means issuing some 50k updates (which takes days). I’ll discuss it with my colleagues after the weekend. I do, however, find it odd that the checksums differ the way they do, with only a couple of nibbles changing in the middle bytes.
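If a fixed collation turns out to be the answer, one way to pin it without altering the table itself would be inside the view’s CHECKSUM; a sketch, where the collation and object names are placeholders (note that this changes every existing checksum, hence the ~50k updates):

    using System.Data.SqlClient;
    using Dapper;

    public static class ViewMigration
    {
        // Pin the varchar column to one explicit collation inside CHECKSUM,
        // so both servers hash the same bytes regardless of their defaults.
        private const string AlterView = @"
            ALTER VIEW dbo.DemoSrcView AS
            SELECT Id,
                   CHECKSUM(IntField, Id, WhenField,
                            TextField COLLATE Latin1_General_CI_AS) AS [checkvalue]
            FROM dbo.DemoSource;";

        public static void Apply(string connectionString)
        {
            using var db = new SqlConnection(connectionString);
            db.Execute(AlterView);
        }
    }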