Azure Digital Twins are the virtual counterparts of systems, sensors or even complete factories in the real world. The digital-twin concept has been around for a while and I have used it in several customer projects to get an as-good-as-real-time view of the state of complex systems. It comes with the additional benefit of having historical data, e.g. to follow up on errors or to predict the future with the help of machine learning algorithms. In addition, the ability to simulate and test possible future situations or different development scenarios against a close-to-reality model cannot be overrated!
While these custom implementations work great, it must be admitted that significant effort is necessary to reach this goal. That is why I consider Azure Digital Twins the arrival of a game-changing platform service for future IOT solutions. Azure Digital Twins save a lot of development effort, are very well integrated with other Azure IOT offerings such as IOT Hub and IOT Central, and build on IOT Plug & Play. This is taking the fast lane! It is a powerful combination of services that is going to revolutionize the way IOT solutions will be built in the coming years. The good development story behind Twins is supported by great tools for visualization and reporting, something often neglected by standard IOT approaches. Any neglect in this area is dangerous, because capable reporting and querying functionality is essential to run, maintain and evolve your solutions in the field.
I predict that Azure Digital Twins will be seen quite often in upcoming solutions. 🙂
IOT Central is Microsoft’s low-code, low-effort, easy-to-use approach to the world of embedded projects. This is quite a demanding challenge, because real-world problems tend to be complex, and what can a tool do to make them simple? Well, normally you start by defining an environment, to get rid of at least some of the parameters and thus reduce complexity. This is a valid approach, but for a tool/service vendor it carries the danger that the overlap between your defined environment and customers' common real-world use cases is not large enough, or, in the worst case, does not exist at all. Azure IOT Central, in the beginning, felt a bit like: great base features, but not enough to cover a complete project's spectrum of demands. To me it was therefore good for samples or a quick POC for a project. However, the IOT Central team has kept improving it steadily, and so the product is getting more serious as we speak.
The newest update provides some very interesting features, like jobs that can be executed on devices (very important for device management), webhook improvements around identity management, device templates to support IOT Plug & Play, as well as improvements to the dashboard.
For me, at least, that is enough new stuff to justify a closer and serious re-visit of IOT Central!
Microsoft is providing a way to “modernize” older Windows CE applications by moving them onto Windows 10 IoT Core using a new feature called Windows CE App Containers. This is certainly well-intended, but customers should double-check whether it really makes sense for their use case to follow that path, just to avoid ending up in a cul-de-sac. As a former Windows Embedded MVP and Windows Embedded Silver Partner, I am very aware of the variety of existing CE applications, and only in rare cases would I feel comfortable recommending to a customer to containerize an existing CE app.
If you feel the need to modernize an existing Windows CE system, there are several options you should consider first, depending on the nature of your application.
Here is a quick list of options that come to mind:
Hard real-time systems written in C or C++
Neither Windows 10 IoT Core nor IoT Enterprise is hard real-time capable, due to Windows 10's preemptive scheduler.
Have a look at alternative hardware and operating systems from other vendors or, quite interestingly, at Azure Sphere from Microsoft, which supports hard real-time and is security-hardened for IoT at the same time. It also includes support for the ThreadX real-time operating system (also recently acquired by Microsoft).
Normal UI or service applications written in C, C++, Java or .NET Compact Framework
Check whether these applications can be modernized with a new design leveraging Cloud technology! Looking at the Microsoft Azure ecosystem, candidates would be Azure IoT and Azure IoT Edge as well as serverless approaches such as Azure Functions and Logic Apps. Keep in mind that when modernizing applications, it almost never makes sense just to adapt to the newest technology level! Think about redesigning your processes and architecture and streamlining end-user experiences with modern Cloud technologies!
Move your application onto cross-platform technologies such as .NET Core and ASP.NET Blazor! This often shakes off the chains of being bound to a certain hardware/OS combination, and ideally you are able to grow a family of devices using the same software across different hardware devices and OSes.
Use a Cloud native, distributed architectural approach to be able to grow and advance your solution organically
Change the communication strategy in your solution from connected, directed calls (as is often found in older applications) towards asynchronous, message-based communication. This will add a lot of robustness and extensibility to your system! (See the short sketch after this list.)
Applications using certain Windows CE Apps or desktop features
Port your application to Windows IoT Enterprise; this is the only path to be future-proof, as App Containers as well as IoT Core are going to reach end of life by the end of this decade.
There may be rare cases justifying the CE App Container as a transition/bridge solution, but these must be thoroughly analyzed! App Container support is not just lift and shift and comes with at least “some” porting effort. Check whether this effort really is as small as the marketing department says, and compare it against the porting/redesign efforts explained above. I always recommend 20% of the estimated porting costs as a threshold: if the expected containerizing effort is higher, go for the redesign.
Keep in mind that containerizing only buys you time; you will need to port the app anyway!
Really large and complex applications, which are expensive to port
OK, the first mistake was to put such a large and complex application onto a small embedded device running Windows CE! I am pretty sure that with this kind of application you are having other troubles as well, such as performance and resource-management problems on the device.
The best thing is to port your application to a capable Windows 10 IoT Enterprise embedded PC system right away. Do not waste money on a bridge solution, as it may cause additional problems and is not really suited to solve the existing ones. A redesign is a must to make your app more manageable and fix existing issues!
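To illustrate the asynchronous, message-based communication recommended in the option list above, here is a minimal C# sketch using the Azure.Messaging.ServiceBus SDK. The connection string, queue name and payload are placeholders; this only shows the pattern of publishing an event instead of calling a remote service directly, not a production setup.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

class TelemetryPublisher
{
    // Hypothetical connection string and queue name; in a real app these come from configuration.
    private const string ConnectionString = "<service-bus-connection-string>";
    private const string QueueName = "machine-events";

    static async Task Main()
    {
        // The client and sender are lightweight and can be reused for many messages.
        await using var client = new ServiceBusClient(ConnectionString);
        ServiceBusSender sender = client.CreateSender(QueueName);

        // Instead of a directed, connected call, publish an event message.
        // Consumers pick it up asynchronously, which decouples sender and receiver.
        var message = new ServiceBusMessage("{\"machineId\":\"M42\",\"state\":\"ready\"}")
        {
            ContentType = "application/json"
        };

        await sender.SendMessageAsync(message);
        Console.WriteLine("Event published.");
    }
}
```

Because the queue buffers messages, the sender keeps working even if a consumer is temporarily down, which is exactly the robustness gain mentioned above.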
Yes, there certainly are more approaches and arguments, but I think the ones laid out above cover most of the ground of this discussion.
If you need some ideas how to handle the transition in your specific use case or if there are other questions, just drop me a line and we will find a way to help you out!
SignalR, at least to me, is an absolutely underrated technology when it comes to real-time messaging. Disclaimer for my embedded readers: yes, at least as long as you do not require millisecond response times – but that is seldom the case in standard applications.
What I really appreciate about this service is the pub/sub messaging approach, which scales great and reduces the dependencies between client and publisher to a minimum. All of it comes as PaaS in a serverless fashion! This is definitely worth a look!
Quite often, new and innovative solutions require at least some technical effort. IOT systems, for example, need to be deployed, calibrated and provisioned with network and power, which takes quite some effort. Depending on the use case, Live Video Analytics, a feature of Azure Media Services, might be able to reduce this effort. All you need is a camera and, ideally, an IOT Edge device connected to it. This is especially helpful in dynamic environments, such as delivery entrances, machine ports or storage racks, where a lot of different things are going on simultaneously and dedicated sensors are often hard to calibrate and place. Video Analytics uses AI models to detect motion and objects and can even go down to detecting and reading text, such as numbers on license plates, addresses on parcels, etc. Microsoft describes some of the interesting possibilities and scenarios quite well in this recent blog post.
Detect workers and cargo in a video stream
What I like is that the AI video analytics models can run on an edge device. This saves a lot of bandwidth and also keeps your possibly sensitive video material on-premises! There are ready-made models available, but you can also build custom ones and thus create, adapt and fine-tune the detection for your use case. Existing video streams can also be used for processing, which, in some cases, enables you to start right away and focus on implementing the IOT Edge solution. As the analytics models are able to create events that can be consumed via an Event Hub, they can serve as a publisher of triggers to build business solutions on. Add Azure serverless capabilities and you have a sophisticated video analytics system for your use case up and running in days, or even just hours.
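As a minimal sketch of that last step, an Azure Function in C# could consume the detection events from an Event Hub and trigger business logic. The hub name "detection-events", the app setting "EventHubConnection" and the payload shape are assumptions for illustration only.

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class DetectionEventProcessor
{
    // "detection-events" and "EventHubConnection" are hypothetical names; they would
    // point at the Event Hub fed by the video analytics module on the edge device.
    [FunctionName("DetectionEventProcessor")]
    public static void Run(
        [EventHubTrigger("detection-events", Connection = "EventHubConnection")] string[] events,
        ILogger log)
    {
        foreach (string detectionJson in events)
        {
            // The payload is whatever the analytics model emits, e.g. a JSON inference result.
            log.LogInformation("Detection event received: {Event}", detectionJson);

            // From here, kick off business logic: raise an alert, update a dashboard, store the result, etc.
        }
    }
}
```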
The Microsoft teams have geared up, this time, of course, virtually, to present all the newest stuff from their development repositories. Looking at Azure IOT, there is quite some interesting information in the pipeline. The focus this year seems to be on Azure Sphere and Digital Twins, although they might come up with some surprises as well.
Quite often there are situations where one needs dynamically changing information delivered proactively, as fast as possible. Stock market info is the poster child here, but there are quite a few other day-to-day use cases that also require close-to-real-time display of relevant data on many screens and across locations – traffic information, factory line status and logistics tracking, to name a few.
Looking at the Microsoft technology stack, SignalR is the tool of choice to tackle these requirements. SignalR leverages several different transports such as WebSockets, Server-Sent Events or Long Polling, transparently for developers in the background, to deliver the needed information in the best way possible, based on the connectivity scenario and connection quality of a client application.
It is great to see that SignalR is now provided in “Serverless Mode” by Azure SignalR Service, which takes away scaling and infrastructure maintenance effort for this service from developers.
Following the very efficient “low code/serverless” approach I am always advocating, I recommend having a look at this interesting sample, which shows how to do real-time communication leveraging the serverless SignalR Service in combination with Azure Functions.
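To give a rough idea of what such a setup looks like (this is my own minimal C# sketch, not the linked sample), here are two Azure Functions: a negotiate endpoint handing out connection info and an HTTP-triggered broadcast using the SignalR output binding. The hub name "notifications" and the message target "newData" are illustrative assumptions.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.SignalRService;

public static class RealTimeFunctions
{
    // Clients call this endpoint first to obtain the connection details for the Azure SignalR Service.
    [FunctionName("negotiate")]
    public static SignalRConnectionInfo Negotiate(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
        [SignalRConnectionInfo(HubName = "notifications")] SignalRConnectionInfo connectionInfo)
    {
        return connectionInfo;
    }

    // Any caller (another function, a Logic App, a device backend) can push a payload
    // that is fanned out to all connected clients listening for the "newData" target.
    [FunctionName("broadcast")]
    public static Task Broadcast(
        [HttpTrigger(AuthorizationLevel.Function, "post")] string payload,
        [SignalR(HubName = "notifications")] IAsyncCollector<SignalRMessage> signalRMessages)
    {
        return signalRMessages.AddAsync(new SignalRMessage
        {
            Target = "newData",
            Arguments = new object[] { payload }
        });
    }
}
```

The pub/sub character shows nicely here: the broadcaster does not know how many clients are connected, and clients only subscribe to the target they care about.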
Take it from me, SignalR is a great and fun technology to work with!
There are two ways to handle state in applications. One is to keep the state close to the business logic (in memory); this is called “stateful”. The other is to persist state somewhere in a store away from the business logic, e.g. in a SQL DB, a document DB or even a Blob; this is called “stateless”. Both variations have pros and cons; here are the most striking ones:
Stateful
Pros
Very fast access to state data
Straightforward to implement
Cons
Hard to use in scenarios with concurrent access
Persistence is not easy, especially if the persisted state needs to be up to date.
Difficult to synchronize with other systems (e.g. between Azure regions). This is especially worrisome in high availability / disaster recovery scenarios.
Does not really scale well in concurrency scenarios
Adds a lot of state-handling logic to the business logic if you want to satisfy more complex scenarios such as session context, transactions or multi-tenancy.
Hard to debug
Data is volatile and therefore difficult to re-use
Memory use grows linearly with the amount of data, which might create application problems under high load
Re-use of stateful instances might get difficult/problematic
Stateless
Pros
Persisted state can be accessed easily
Great in concurrency situations if a suitable store (database) is used, because the store handles access synchronization.
Easy to debug; there is good tool support for many stores
Session or transactional capabilities are quite often built into stores
Great data re-use options in other parts of the application
Good data synchronization capabilities, which enables robust HA/DR scenarios
Easy re-use of stateless components
Cons
Implementation not as straightforward, because of store access
Data access from business logic not as fast as stateful direct memory access
Requires an additional PaaS store, such as Azure SQL or Cosmos DB, bringing in additional infrastructure costs / component risks.
No wrong or right
Real life is not “black or white”, and due to this a recommendation to use just one of these approaches certainly will not fit all possible use cases. However, in serverless applications a stateless approach should be favored, because it enables true flexibility, re-use and granularity without worrying about state handling. Stateful scenarios make sense especially if one thinks of them as a “cache”, which ideally is backed by persisted data from a store and kept up to date via events or cache expiration. Azure Functions are designed to be used in a stateless fashion, and most available 3rd-party connectors adhere to this paradigm as well.
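As a toy illustration of the two styles, here is a minimal C# sketch; the class names and the store interface are made up for the example. The stateful variant keeps a counter in memory, the stateless variant pushes it to an external store on every call.

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Hypothetical abstraction over a persistent store (Azure SQL, Cosmos DB, Blob, ...).
public interface ICounterStore
{
    Task<int> GetAsync(string key);
    Task SetAsync(string key, int value);
}

// Stateful: fast in-memory access, but the state lives inside this one instance only.
// Scaling out, failover or region replication now needs extra synchronization logic.
public class StatefulCounter
{
    private readonly ConcurrentDictionary<string, int> _counts = new ConcurrentDictionary<string, int>();

    public int Increment(string key) => _counts.AddOrUpdate(key, 1, (_, current) => current + 1);
}

// Stateless: every call reads and writes the store, so any instance (or a serverless
// function) can handle any request, at the cost of an extra round trip per call.
public class StatelessCounter
{
    private readonly ICounterStore _store;

    public StatelessCounter(ICounterStore store) => _store = store;

    public async Task<int> IncrementAsync(string key)
    {
        int current = await _store.GetAsync(key);
        int next = current + 1;
        await _store.SetAsync(key, next);   // concurrency control is delegated to the store
        return next;
    }
}
```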
Choosing a stateless store
As you most probably can guess, the choice of a data store has quite an impact on a solution. We have been talking about Azure SQL, Cosmos DB and Blobs; Azure Tables should also be mentioned in this context. Looking at functionality, databases should be preferred over relatively raw storage solutions such as Blobs or Azure Tables. Unless your application is a very simple one that will never grow, those raw stores will not provide functionality you might need over time.
Azure SQL and Cosmos DB do provide a lot of data-handling functionality. Porting a SQL-based application to Azure SQL might provide some good opportunities for code re-use from an existing system (e.g. stored procedures). Azure SQL has good data synchronization mechanisms with failover capabilities (single master) and provides great scale via partitioning or sharding of data.
If you are completely free to choose, I definitely recommend having a look at Cosmos DB. Its data access performance is absolutely fantastic: I have seen read access durations of 1-2 milliseconds. It synchronizes instances around the globe, configured by a mouse click, and enables multi-master scenarios with different consistency levels. Additionally, Cosmos DB can be enhanced with the powerful indexing and search capabilities offered by Azure Search, and it provides connectors into the “Big Data” world, e.g. for Azure Databricks. There is one drawback with Cosmos DB, which is the higher price compared to Azure SQL or Azure Storage. In less sophisticated scenarios, where cross-region synchronization, high-speed reads and multi-master setups are not required, these higher costs may not be justified. Nevertheless, if your requirements are more demanding, Cosmos DB will be your friend, and you can weigh the higher Azure costs against the implementation and infrastructure effort saved! It should also be taken into account that Microsoft needs to set up datacenters, networks and servers to provide this “Cosmos DB level of comfort” to developers.
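Here is a minimal C# sketch of what this looks like from the application side, using the Microsoft.Azure.Cosmos SDK. The endpoint, key, database and container names, region choice and item type are placeholders for illustration.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Newtonsoft.Json;

public class DeviceStateRepository
{
    private readonly Container _container;

    public DeviceStateRepository(string endpoint, string key)
    {
        // Prefer the Azure region closest to this app instance; consistency is a per-client choice.
        var client = new CosmosClient(endpoint, key, new CosmosClientOptions
        {
            ApplicationRegion = Regions.WestEurope,
            ConsistencyLevel = ConsistencyLevel.Session
        });

        // "iot" / "deviceState" are hypothetical database and container names.
        _container = client.GetContainer("iot", "deviceState");
    }

    // A point read (id + partition key) is the cheapest and fastest Cosmos DB operation,
    // assuming the container is partitioned on /id.
    public Task<ItemResponse<DeviceState>> GetAsync(string deviceId) =>
        _container.ReadItemAsync<DeviceState>(deviceId, new PartitionKey(deviceId));

    public Task UpsertAsync(DeviceState state) =>
        _container.UpsertItemAsync(state, new PartitionKey(state.Id));
}

public class DeviceState
{
    [JsonProperty("id")]
    public string Id { get; set; }      // Cosmos DB requires an "id" field; also used as partition key here
    public string Status { get; set; }
}
```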
There are quite a few more PaaS Services you can use, but, for a start, let us stay with those mentioned.
Sorting out what to choose
The basic bricks in the Azure serverless construction kit are Azure Functions. They should be used to implement custom pieces of functionality. It is important to have many small functions, not a single huge one that holds the complete logic of an application; good granularity enables flexible combinations as well as easy re-organization as business requirements change.
Azure Functions can call each other directly, which is perfectly valid, or use the services of an orchestrator such as Logic Apps or Flow to create workflows based on self-implemented Functions, 3rd-party functionality pulled in via “connectors”, or a mixture of both. The latter scenario is most probably the desired one for many business solutions: use custom as well as 3rd-party functionality bricks to get things done fast and efficiently. The difference between Flow and Logic Apps is that Flow is an external service used to orchestrate selected functionality exposed by one's app through webhooks. It is therefore much like the better-known If-This-Then-That (IFTTT) orchestrator and is ideally used to let anybody on the outside consume functionality exposed by your app, if desired. Due to this, and the fact that normally nobody wants to expose all of an app's inner workings, Logic Apps, as the Azure-native orchestrator, would be the best choice for the development of a custom solution.
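As a minimal sketch of such a building block, here is a small HTTP-triggered Azure Function in C# that a Logic App (or another function) could call via its HTTP action; the function name, route and greeting logic are, of course, just placeholders.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class GreetCustomer
{
    // One small, focused piece of functionality; an orchestrator combines it with
    // connectors (mail, Teams, storage, ...) into a larger workflow.
    [FunctionName("GreetCustomer")]
    public static Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = "greet/{name}")] HttpRequest req,
        string name)
    {
        return Task.FromResult<IActionResult>(new OkObjectResult($"Hello, {name}!"));
    }
}
```

Keeping each function this small is what makes it easy to rearrange the workflow in the orchestrator later without touching code.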
Leveraging other Cloud services
As already mentioned, there are a lot of 3rd-party building blocks one can use, e.g. to send mail or Twitter messages with the help of connectors. Fortunately, there are also connectors for other Azure infrastructure parts and services, such as message queues. You could set up a classical Service Bus namespace or rely on the global Azure Event Grid messaging infrastructure. This enables a solution to work asynchronously, buffering peak loads or even surviving smaller outages without losing information.
What else do we need?
At the end of the day, we want to persist our data in a reliable store, and here it comes in handy that Azure offers Blob, Table and Azure SQL storage services to be tailored to your needs. In this context, Cosmos DB is one of the new shining stars, providing polyglot access in easily deployed, cross-regional, multi-master NoSQL store scenarios. This DB is really fast and unbelievably efficient; however, it is not one of the cheapest services. Another, more than important, pillar of a solution is identity. In Azure you can choose between Azure Active Directory, if your solution is facing company employees, and Azure Active Directory B2C, if real customers are using your application. Both directories provide state-of-the-art security and identification mechanisms leveraging the OAuth and OpenID Connect standards. And no worries, it is also possible to use both within one solution to satisfy the needs of different roles. Example: an employee as content provider using AAD – a customer as content subscriber identified by AAD B2C!
Quite often solutions need to work with data: search it, evaluate it, recognize trends or make recommendations. At this point Azure intelligence, backed by Azure Search, Cognitive Services and Machine Learning, comes into play. These services cover a broad range of complexity, reaching from easy-to-use picture recognition to highly demanding AI models that, say, predict car prices. It is certainly a good idea to start with low-hanging fruit, such as providing good search capabilities in your app, before delving into the deep seas of data lakes and analytics clusters to back continuously optimized AI models.
And yes, there is more …..
With the services described, we are already off to a good start to create a first serverless solution. But we have by far not seen all of the possibilities. Nevertheless, this is a good bridgehead to rest at for the moment. Rome was not built in a day, and we are going to make our way through the jungle episode by episode.
Basically, any application you are not hosting on your own server can be considered serverless; it should not need one of your servers as a backend, of course. By this definition even desktop apps can be considered serverless, but nowadays the mainstream approach has shifted towards applications using a backend API in the Cloud. Desktop apps also have installation and security issues and have largely been replaced by apps loaded from a store, which takes care (or at least should) that no malware can be installed on your mobile or desktop client. This approach works well for commercial apps, but for custom business solutions the store overhead (registration with a store provider, additional testing costs) is often avoided by delivering HTML/JavaScript-based single-page applications (SPAs) to employees, especially if we are talking about internal business solutions.
These apps can either be loaded from a storage location in the Cloud, for example an Azure Blob, which can even be distributed worldwide via a content delivery network (CDN) for high-speed loading, or be hosted as an Azure App Service web application. A point in favor of using an Azure App Service is the additional infrastructure one gets from the platform: development and deployment support, easy scaling, security as well as monitoring.
To develop SPAs, profound knowledge of HTML and JavaScript and of frameworks such as Angular, jQuery or React, to name just the prominent ones, is required if you want to achieve good to great results.
This development process is also not as fast as it used to be in the past when creating, for example, Visual Basic business applications, which had other issues, of course.
In this light it may be interesting that there is a new Microsoft .NET Core technology on the horizon called Blazor, which enables developers to code web applications in a combination of HTML and C# with a full round-trip development experience in Visual Studio. Blazor comes in two flavors or hosting models:
Hosted in an ASP.NET app service, running as server-side code sending rendered HTML down to the browser client; SignalR is used for the interaction with the backend (available right now with ASP.NET Core 3.0).
A client app hosted in the browser, leveraging the new WebAssembly standard to be compatible with all common browser engines; the local app can use web API calls or SignalR to communicate with the backend (in the future available with .NET 5.0, which includes ASP.NET Core, now in public preview).
Quite a bit of this new approach is still in the making, but what looks compelling to me is the re-use of C# skills in the UI layer in combination with the excellent tool support through Visual Studio. It also promises some reduction of the complexity and churn that comes with the fast-changing releases of the JavaScript frameworks, which quite often makes it a challenge to take future-proof decisions for larger teams in longer-lasting projects. Blazor therefore also has the potential to improve development speed significantly!
Blazor also comes with a componentized approach, a concept introduced a long time ago in various Microsoft technologies ranging from VB to Office and also implemented by JavaScript frameworks such as React. Razor components (Razor being the preceding ASP.NET server-side framework) can be re-used in Blazor, and due to this a really good 3rd-party offering is already available: grids, chart and gauge displays, calendars, date and time pickers, and whatever else one needs for an application.
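To give a feel for the component model, here is a minimal Razor component sketch, essentially the classic counter example from the project template; the file name Counter.razor is illustrative.

```razor
@* Counter.razor - a self-contained Blazor component: markup plus C# logic in one file *@
<h3>Counter</h3>

<p>Current count: @currentCount</p>

<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

@code {
    private int currentCount = 0;

    private void IncrementCount()
    {
        currentCount++;
    }
}
```

Such a component can then be embedded in other pages simply as <Counter />, which is the same composition model the 3rd-party component vendors build on.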
We will see how it evolves, but the first impression with Blazor is quite promising.