Some concepts gain more traction than others, and the Unified Namespace (UNS) is one that stands out. By popular request: welcome to Part 5 on working with Industrial data.
Great, insightful article, David and Willem! True, one can use either OPC UA or MQTT to implement a UNS. That said, using the MQTT Sparkplug specification has added advantages, such as auto-discovery and defined data types. Adding a link here to a blog that touches upon it: https://www.hivemq.com/blog/implementing-unified-namespace-uns-mqtt-sparkplug/
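To make the auto-discovery and typed-metrics point concrete, here is a minimal sketch of the Sparkplug-style topic namespace and an NBIRTH (node birth) message with explicitly typed metrics. The group and node IDs are made up for illustration, and real Sparkplug B payloads are Protobuf-encoded (e.g. via Eclipse Tahu); plain JSON is used here only to show the structure.

```python
import json
import time

GROUP_ID = "plant1"         # hypothetical Sparkplug group
EDGE_NODE_ID = "line3-plc"  # hypothetical edge node

# NBIRTH announces the node and declares its metrics with explicit data types.
# This is what lets consumers auto-discover what a publisher offers,
# instead of relying on out-of-band documentation of topic contents.
nbirth_topic = f"spBv1.0/{GROUP_ID}/NBIRTH/{EDGE_NODE_ID}"
nbirth_payload = {
    "timestamp": int(time.time() * 1000),
    "metrics": [
        {"name": "Line3/Temperature", "dataType": "Float",   "value": 21.5},
        {"name": "Line3/Running",     "dataType": "Boolean", "value": True},
    ],
    "seq": 0,
}

print(nbirth_topic)
print(json.dumps(nbirth_payload, indent=2))
```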
Thanks for taking the time to read and respond!
I think it is good to have auto-discovery protocols such as Sparkplug, but in my honest opinion, working towards a UNS is 90% culture/people/master data/... and 10% technology (having said that, the technology should definitely help us and not work against us ;))
(I could also live with 80-20 or 70-30 ;) )
On the UNS, have a look at this: https://shorturl.at/JaX9L for a revealing critique of the UNS by Alasdair Gilchrist.
With DataOps tools (e.g. HighByte, Litmus, …) you can clean your "data in motion" before sending it to the broker. Very good article; our architecture is very close to what you describe.
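As a rough illustration of cleaning "data in motion" before it reaches the broker, here is a minimal sketch of an edge-side cleaning step. The tag names, units, and limits are invented for the example; DataOps platforms such as HighByte or Litmus implement this with their own modelling and flow tooling rather than hand-written code.

```python
from datetime import datetime, timezone

def clean_sample(raw: dict) -> dict | None:
    """Normalise a raw PLC sample before publishing it to the UNS broker."""
    value = raw.get("value")
    if value is None:
        return None                    # drop empty reads instead of publishing them
    if raw.get("unit") == "degF":      # normalise units at the edge
        value = (value - 32) * 5 / 9
    if not (-50 <= value <= 200):      # reject physically impossible readings
        return None
    return {
        "tag": raw["tag"],
        "value": round(value, 2),
        "unit": "degC",
        "timestamp": raw.get("timestamp")
                     or datetime.now(timezone.utc).isoformat(),
    }

# Example: a Fahrenheit reading is converted and stamped before publishing.
print(clean_sample({"tag": "Line3/Temperature", "value": 70.7, "unit": "degF"}))
```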
Thanks!
Since last year I have been working with Timeseer.AI to clean and augment data at scale.
I can always share some experiences if you like (and I'm also curious to see what your landscape looks like). Feel free to drop me a note anytime via LinkedIn or david@itotinsider.com