IoT Central is Microsoft’s low-code, low-effort, easy-to-use approach to the world of embedded projects. This is quite a demanding challenge, because real-world problems tend to be complex, so what can a tool do to make them simple? Well, normally you start by defining an environment, to eliminate at least some of the parameters and thus reduce complexity. This is a valid approach, but for a tool/service vendor it carries the danger that the overlap between your defined environment and the common real-world use cases of customers is not large enough or, in the worst case, does not exist at all. In the beginning, Azure IoT Central felt a bit like: great base features, but not enough to cover the full spectrum of a project’s demands. To me it was therefore good for samples or a quick POC. However, the IoT Central team kept improving steadily, and the product is getting more serious as we speak.
The newest update provides some very interesting features, like jobs that can be executed on devices (very important for device management), webhook improvements regarding identity management, device templates supporting IoT Plug & Play, as well as improvements to the dashboard.
At least for me, that is enough new material to justify a closer, serious revisit of IoT Central!
Microsoft is providing a way to “modernize” older Windows CE applications by moving them onto Windows 10 IoT Core using a new feature called Windows CE App Containers. This is certainly well-intended, but customers should really double-check whether it makes sense for their use case to follow that path, just to avoid ending up in a cul-de-sac. As a former Windows Embedded MVP and Windows Embedded Silver Partner, I am very aware of the variety of existing CE applications, and only in rare cases would I feel comfortable recommending that a customer containerize an existing CE app.
If you feel the need to modernize an existing Windows CE system, there are several options you should consider first, depending on the nature of your application.
Here is a quick list of options that comes to my mind:
Hard real-time systems written in C or C++
Neither Windows 10 IoT Core nor IoT Enterprise is hard real-time capable, due to Windows 10’s preemptive scheduler.
Have a look at alternative hardware and operating systems from other vendors or, quite interestingly, at Azure Sphere from Microsoft, which supports hard real-time and is security-hardened for IoT at the same time. It also includes support for the ThreadX real-time operating system (also recently acquired by Microsoft).
Normal UI or service applications written in C, C++, Java or .NET Compact Framework
Check whether these applications can be modernized with a new design leveraging cloud technology! Within the Microsoft Azure ecosystem, candidates would be Azure IoT, Azure IoT Edge, as well as serverless approaches such as Azure Functions and Logic Apps. Keep in mind that when modernizing applications, it almost never makes sense just to adopt the newest technology level! Think about redesigning your processes and architecture and streamlining end-user experiences by leveraging modern cloud technologies!
Move your application onto cross-platform technologies such as .NET Core and ASP.NET Blazor! This often shakes off the chains of being bound to a certain hardware/OS combination, and ideally you are able to grow a family of devices using the same software across different hardware and operating systems.
Use a cloud-native, distributed architectural approach to be able to grow and advance your solution organically.
Change the communication strategy in your solution from connected, directed calls (as often found in older applications) towards asynchronous, message-based communication. This will add a lot of robustness and extensibility to your system!
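To illustrate the idea, here is a minimal, hypothetical sketch of such decoupling using a plain in-process queue from Python’s standard library (a real system would use a broker such as Service Bus or Event Grid; all names here are illustrative):

```python
# Sketch: replacing a direct, synchronous call with asynchronous,
# message-based communication. The producer never calls the consumer
# directly; it only puts messages into a buffer.
import queue

def process_message(msg: dict) -> str:
    # Consumer logic is independent of who produced the message.
    return f"stored reading {msg['sensor']}={msg['value']}"

def run_pipeline(messages: list) -> list:
    buffer = queue.Queue()
    # Producer side: fire-and-forget, no direct dependency on the consumer.
    for m in messages:
        buffer.put(m)
    # Consumer side: drains the buffer at its own pace; with a durable
    # broker, a restarted consumer would pick up without losing messages.
    results = []
    while not buffer.empty():
        results.append(process_message(buffer.get()))
    return results
```

The key point is that producer and consumer only share a message format, not a call interface, which is where the robustness and extensibility come from.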
Applications using certain Windows CE Apps or desktop features
Port your application to Windows 10 IoT Enterprise; this will be the only future-proof path, as App Containers as well as IoT Core are going to reach end of life at the end of this decade.
There may be rare cases justifying a transition via CE App Containers as a bridge solution, but these must be thoroughly analyzed! App Container support is not just lift and shift and comes with at least “some” porting effort. Check whether this effort really is as small as the marketing department says, against the possible porting/redesign efforts explained above. I always recommend 20% of the estimated porting costs as a threshold: if the expected containerizing effort is higher, go for a redesign.
Keep in mind that containerizing only buys you time; you will need to port the app anyway!
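My 20% rule of thumb can be written down as a tiny decision helper. This is only a sketch of the heuristic described above; the threshold is my personal recommendation, not an official figure:

```python
# Decision heuristic: containerizing is only worthwhile as a bridge
# if its effort stays below ~20% of a full port's estimated cost.
def recommend(porting_cost: float, container_cost: float,
              threshold: float = 0.20) -> str:
    """Return a recommendation based on the cost ratio."""
    if container_cost <= threshold * porting_cost:
        return "containerize as a bridge (still plan the port!)"
    return "redesign / port directly"
```

For example, with a port estimated at 100 person-days, a containerizing effort of 30 person-days already points towards the direct redesign.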
Really large and complex applications, which are expensive to port
OK, the first mistake was to put such a large and complex application on a small embedded device running Windows CE in the first place! I am pretty sure that with this kind of application you are having other troubles as well, such as performance and resource management problems on the device.
The best thing is to port your application to a capable Windows 10 IoT Enterprise embedded PC system right away. Do not waste money on a bridge solution, as it may cause additional problems and is not really suited to solving the existing ones. A redesign is a must to make your app more manageable and fix existing issues!
Yes, there certainly are more approaches and arguments, but I think the ones laid out above cover most of the ground in this discussion.
If you need some ideas how to handle the transition in your specific use case or if there are other questions, just drop me a line and we will find a way to help you out!
SignalR, at least to me, is an absolutely underrated technology when it comes to real-time messaging. Disclaimer for my embedded readers: yes, at least as long as you do not require millisecond response times, but that is seldom the case in standard applications.
What I really appreciate about this service is the pub/sub messaging approach, which scales great and reduces dependencies between client and publisher to a minimum. All of it coming as PaaS in a serverless fashion! This is definitely worth a look!
Quite often, new and innovative solutions require at least some technical effort. IoT systems, for example, need to be deployed, calibrated and provisioned with network and power, which takes quite some effort. Depending on the use case, Live Video Analytics, a feature of Azure Media Services, might be able to reduce this effort. All you need is a camera and, ideally, an IoT Edge device connected to it. This is especially helpful in dynamic environments, such as delivery entrances, machine ports or storage racks, where a lot of different things are going on simultaneously. In such dynamic environments, dedicated sensors are often hard to calibrate and place. Video Analytics uses AI models to detect motion and objects and can even go down to detecting and reading text, such as numbers on license plates, addresses on parcels, etc. Microsoft describes some of the interesting possibilities and scenarios quite well in this recent blog post.
What I like is that the AI video analytics models can run on an edge device. This saves a lot of bandwidth and also keeps your potentially sensitive video material on-premises! There are ready-made models available, but you can also build custom ones and thus create, adapt and fine-tune the detection for a use case. Existing video streams can also be used for processing, which, in some cases, enables you to start right away and focus on implementing the IoT Edge solution. As the analytics models are able to create events to be consumed by an Event Hub, they can be used as publishers of triggers on which to build business solutions. Use Azure serverless capabilities, and you have a sophisticated video analytics system for your use case up and running in days, or even just hours.
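To give a feel for the kind of event such a model emits, here is a deliberately simplified motion-detection sketch: it compares consecutive grayscale frames (represented as flat pixel lists) and flags frames where enough pixels changed. Real services such as Live Video Analytics use trained AI models; this only shows the core idea, and all names and thresholds are illustrative:

```python
# Sketch: frame-differencing motion detection. Each "frame" is a flat
# list of grayscale pixel values (0-255).
def motion_events(frames: list, threshold: float = 0.25) -> list:
    """Return indices of frames where motion was detected."""
    events = []
    for i in range(1, len(frames)):
        prev, cur = frames[i - 1], frames[i]
        # Count pixels that changed noticeably between frames.
        changed = sum(1 for a, b in zip(prev, cur) if abs(a - b) > 10)
        if changed / len(cur) > threshold:
            # Downstream, this is where an event would be published,
            # e.g. to an Event Hub, to trigger business logic.
            events.append(i)
    return events
```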
Microsoft teams have geared up, this time, of course, virtually, to present all the newest material from their development repositories. Looking at Azure IoT, there is quite some interesting information in the pipeline. The focus this year seems to be on Azure Sphere and Digital Twins, although they might come up with some new things as well.
Quite often there are situations where one needs dynamically changing information delivered proactively, as fast as possible. Stock market info is the poster child here, but there are quite a few other day-to-day use cases that also require close-to-real-time display of relevant data on many screens and across locations: traffic information, factory line status and logistics tracking, to name a few.
Looking at the Microsoft technology stack, SignalR is the tool of choice to tackle these requirements. SignalR leverages several different approaches, such as WebSockets, Server-Sent Events or long polling, transparently for developers in the background, to deliver the needed information in the best way possible, based on the connectivity scenario and quality of a client application.
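The pub/sub pattern behind SignalR can be sketched in a few lines. Note that this is not the SignalR API; the hub, topic and callback names below are purely illustrative of the pattern:

```python
# Sketch: a minimal pub/sub hub. Publishers address a topic, never a
# concrete client, which keeps both sides decoupled.
from collections import defaultdict

class Hub:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic: str, callback) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: str) -> int:
        # Push the message to every current subscriber of the topic
        # and return how many clients were reached.
        for cb in self._subscribers[topic]:
            cb(message)
        return len(self._subscribers[topic])
```

In the real service, the "callback" side is a persistent client connection over WebSockets (or one of the fallback transports), and the hub infrastructure is what Azure SignalR Service runs for you.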
It is great to see that SignalR is now provided in “Serverless Mode” by Azure SignalR Service, which takes away scaling and infrastructure maintenance effort for this service from developers.
There are two ways to handle state in applications. One is to keep the state close to the business logic (in memory); this is called “stateful”. The other is to persist state away from the business logic, somewhere in a store, e.g. a SQL DB, a document DB or even a blob; this is called “stateless”. Both variants have pros and cons; here are the most striking ones:
Stateful pros:
Very fast access to state data
Straightforward to implement
Stateful cons:
Hard to use in scenarios with concurrent access
Persistence is not easy, especially if the persisted state needs to be up to date
Difficult to synchronize with other systems (e.g. between Azure regions), which is especially worrisome in high availability / disaster recovery scenarios
Does not scale well in concurrency scenarios
Adds a lot of state handling logic to the business logic if you want to satisfy more complex scenarios such as session context, transactions or multi-tenancy
Hard to debug
Data is volatile and therefore difficult to re-use
Memory use grows linearly with the amount of data, which might create application problems under high load
Re-use of stateful instances might get difficult/problematic
Stateless pros:
Persisted state can be accessed easily
Great in concurrency situations if a suitable store (database) is used, because the store handles access synchronization
Good to debug; there is good tool support for many stores
Session or transactional capabilities are quite often built into stores
Great data re-use options in other parts of the application
Good data synchronization capabilities, which enables robust HA/DR scenarios
Easy re-use of stateless components
Stateless cons:
Implementation not as straightforward, because of store access
Data access from business logic not as fast as stateful direct memory access
Requires an additional PaaS store, such as Azure SQL or Cosmos DB, bringing in additional infrastructure costs / component risks
No wrong or right
Real life is not black or white and, due to this, a recommendation to use just one of these approaches will certainly not fit all possible use cases. However, in serverless applications a stateless approach should be favored, because it enables true flexibility, re-use and granularity without worrying about state handling. Stateful scenarios make sense especially if one thinks of them as a “cache”, which then ideally is backed by persisted data from a store and kept up to date via events or cache expiration. Azure Functions can only be used in a stateless fashion, and most available 3rd-party connectors adhere to this paradigm as well.
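The “stateful as cache, backed by a store” idea can be sketched as follows. A plain dict stands in for the persistent store (e.g. Azure SQL or Cosmos DB), and the class and parameter names are illustrative:

```python
# Sketch: in-memory state used purely as an expiring read cache in
# front of a persistent store, combining stateful speed with
# stateless robustness.
import time

class CachedStore:
    def __init__(self, store: dict, ttl_seconds: float = 60.0):
        self._store = store      # stateless, persisted source of truth
        self._cache = {}         # stateful, fast, volatile
        self._ttl = ttl_seconds

    def get(self, key: str):
        entry = self._cache.get(key)
        if entry is not None and time.monotonic() - entry[1] < self._ttl:
            return entry[0]      # cache hit: memory-speed access
        value = self._store[key]  # cache miss/expired: read from the store
        self._cache[key] = (value, time.monotonic())
        return value
```

Because the store stays the source of truth, losing the in-memory state costs only performance, not data, which is exactly the property you want in a serverless setting.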
Choosing a stateless store
As you most probably can guess, the choice of a data store has quite an impact on a solution. We have been talking about Azure SQL, Cosmos DB and blobs; Azure Tables should also be mentioned in this context. Looking at functionality, databases should be preferred over relatively raw storage solutions such as Blob storage or Azure Tables. If your application is not a very simple one, or might grow, those raw stores do not provide functionality you might need over time.
Azure SQL and Cosmos DB provide a lot of data handling functionality. Porting a SQL-based application to Azure SQL might provide some good opportunities for code re-use from an existing system (e.g. stored procedures). Azure SQL has good data synchronization mechanisms with failover capabilities (single master) and provides great scale via partitioning or sharding of data.
If you are completely free to choose, I definitely recommend having a look at Cosmos DB. Its data access performance is absolutely fantastic: I have seen read access durations of 1-2 milliseconds. It synchronizes instances around the globe, configured with a mouse click, and enables multi-master scenarios with different consistency levels. Additionally, Cosmos DB can be enhanced with the powerful indexing and search capabilities offered by Azure Search, and it provides connectors into the “Big Data” world, e.g. for Azure Databricks. There is one drawback with Cosmos DB, which is the higher price compared to Azure SQL or Azure Storage. In simple scenarios, where cross-region sync, high-speed data reads and multi-master setups are not required, these higher costs may not be justified. Nevertheless, if your requirements are more demanding, Cosmos DB will be your friend! You will be able to weigh the higher Azure costs against the implementation and infrastructure efforts saved. It should also be taken into account that Microsoft needs to set up datacenters, networks and servers to provide the “Cosmos DB level of comfort” to developers.
There are quite a few more PaaS Services you can use, but, for a start, let us stay with those mentioned.
Sorting out, what to choose
The basic bricks in the Azure serverless construction kit are Azure Functions. They should be used to implement custom pieces of functionality. It is important to have many functions, not a single huge one that holds the complete logic of an application. Good granularity enables flexible combinations as well as easy re-organization as business requirements change.
Azure Functions can call each other directly, which is perfectly valid, or use the services of an orchestrator such as Logic Apps or Flow to create workflows based on self-implemented Functions, 3rd-party functionality offerings pulled in via “connectors”, or a mixture of both. The latter scenario is most probably the one desired for many business solutions: use custom as well as 3rd-party functionality bricks to get things done fast and efficiently. The difference between Flow and Logic Apps is that Flow is an external service used to orchestrate selected functionality exposed by one’s app through webhooks. It is therefore much like the better-known If-This-Then-That (IFTTT) orchestrator and is ideally used to let anybody on the outside use functionality exposed by your app, if desired. Due to this, and the fact that normally nobody wants to expose all of an app’s inner workings, Logic Apps, as the Azure-native orchestrator, is the better choice for developing a custom solution.
Leveraging other Cloud services
As already mentioned, there are a lot of 3rd-party building blocks one can use, e.g. to send mail or Twitter messages with the help of connectors. Fortunately, there are also connectors for other Azure infrastructure parts and services, such as message queues. You could set up a classical Service Bus namespace, or rely on the global Azure Event Grid messaging infrastructure as well. This enables a solution to work asynchronously, buffering peak loads or even surviving smaller outages without losing information.
What else do we need?
At the end of the day, we want to persist our data in a reliable store, and here it comes in handy that Azure offers blob, table and Azure SQL storage services that can be tailored to your needs. In this context, Cosmos DB is one of the new shining stars, providing polyglot access in easily deployed, cross-regional, multi-master NoSQL store scenarios. This DB is really fast and unbelievably efficient; however, it is not one of the cheapest services. Another, more than important, pillar of a solution is identity. In Azure you are able to choose between Azure Active Directory, if your solution is facing company employees, and Azure AD B2C, if real customers are using your application. Both directories provide state-of-the-art security and identification mechanisms leveraging the OAuth and OpenID Connect standards. And no worries, it is also possible to use both within one solution to satisfy the needs of different roles. Example: an employee as content provider using AAD, a customer as content subscriber identified by AAD B2C!
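Both directories hand your application OpenID Connect tokens (JWTs) whose claims identify the user and their role. As a rough sketch of what such a token looks like inside, here is how its payload can be decoded with the standard library. Important caveat: a real application must also validate the token’s signature, issuer, audience and expiry, which is normally done by a library such as MSAL, never by hand like this:

```python
# Sketch: reading the claims section of a JWT. A JWT is three
# base64url-encoded parts separated by dots: header.payload.signature.
import base64
import json

def read_claims(jwt: str) -> dict:
    payload = jwt.split(".")[1]
    # base64url encoding strips '=' padding; restore it before decoding.
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

The `sub`, `aud` and `iss` claims in that payload are what your solution uses to tell an AAD employee from an AAD B2C customer.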
Quite often, solutions need to work with data: search it, evaluate it, recognize trends or make recommendations. At this point, Azure intelligence backed by Azure Search, Cognitive Services and Machine Learning comes into play. These services cover a broad range of complexity, reaching from easy-to-use picture recognition to highly demanding AI models that predict car prices. It is certainly a good idea to start with low-hanging fruit, such as providing good search capabilities in your app, before delving into the deep seas of data lakes and analytics clusters to back continuously optimized AI models.
And yes, there is more …..
With the services described, we are already in for a good start to create a first serverless solution. But we have by far not seen all of the possibilities. Nevertheless, this is a good bridgehead to rest at for the moment. Rome was not built in a day, and we are going to make our way episode by episode through the jungle.
These apps can either be loaded from a storage location in the cloud, for example an Azure Blob, which can even be distributed world-wide via a content delivery network (CDN) for high-speed loading, or hosted as a web application on an Azure App Service. A point in favor of using an Azure App Service is the additional infrastructure one gets from the Azure App platform regarding development and deployment, easy scaling, security, as well as monitoring.
This development process is also not as fast as it was in the past when, for example, creating business Visual Basic applications, which had other issues, of course.
In this light, it may be interesting that there is a new Microsoft .NET Core technology on the horizon called Blazor, which enables developers to code web applications in a combination of HTML and C# with a full round-trip development experience in Visual Studio. Blazor comes in two flavors, or hosting models:
Hosted in an ASP.NET app service, running as server-side code sending rendered HTML pages down to the browser client; SignalR is used for the interaction with the backend (available right now with ASP.NET Core 3.0).
A client app hosted in the browser, leveraging the new WebAssembly standard to be compatible with all common browser engines. The local app can use Web API calls or SignalR to communicate with the backend (in the future available with .NET 5.0, which includes ASP.NET Core, now in public preview).
We will see how it evolves, but the first impression with Blazor is quite promising.
Let us have a look at the classic solution architecture everybody has been implementing for quite a few years and which worked (and still works!) in our company data centers.
Normally you will find three tiers of application layering:
So far, so good! This solution architecture works and has proven itself even in the highest-load scenarios, when implemented correctly and run in a capable data center. Of course, there are the standard cloud-versus-data-center discussion points, but serverless is a bit different, because it is not necessarily bound to either of the two. One could have a serverless approach in one’s own data center, and Azure Stack, for example, is Microsoft’s implementation of this approach (there is more to it, of course, but let us leave it here for the moment). The main problems of 3-tier applications are:
A lot of infrastructure/plumbing code implementation required
Due to this, tight coupling between infrastructure and code, to some extent even dependencies on hardware
Missing flexibility when it comes to change business logic
Many people and departments involved
Therefore lengthy processes required
Always combined tests for infrastructure and code required
These are just the prominent ones, and it all boils down to this: 3-tier architectures work, but they require quite an investment and are not a quick and easy setup.
What is serverless doing differently?
From a company’s perspective, it is only important to implement the business logic and UI; all the other effort (having data centers, networks, servers and other hardware) is undertaken to run 3-tier applications because it is necessary, not because anybody really wants to. To get out of this dilemma, serverless technologies try to put infrastructure functionality into reusable building blocks, which can be assembled and used independently of the underlying infrastructure and hardware. With the arrival of the cloud, this is also what makes it possible for a cloud vendor to share its infrastructure easily between different customers, by dissolving dependencies between code and infrastructure/hardware as much as possible.
What does it look like if one migrates a classic solution architecture to a serverless one? Well, here is an example (high-level, admittedly), but I am going to delve into more of the details in my upcoming posts.
It is easy to see that there is not necessarily a change in the logical tiering, but rather in the platform services (PaaS) or components used in a dedicated tier. Through the re-use of components, there may also be more than the 3 classical layers, by combining the building blocks, such as Functions or Logic Apps, in a more granular way. The very striking benefit of this approach is that one has to write only application-related code, which ideally is written in componentized form to mirror the building-block-like approach of the PaaS services used. Anybody who has played with Lego bricks growing up may see an analogy here.
The ease of use of these functional blocks, which have pre-built connectors into the rich functionality of cloud services, enables unprecedented flexibility and speed in adapting an application’s logic to changing market requirements.
As an example for the business logic tier, a chain of functionality calls can easily be rearranged by re-organizing it in configuration, without writing code, which is fast and cost-saving.
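The configuration-driven re-arrangement can be sketched as follows: the processing chain is just a list of step names, so reordering or swapping entries changes the workflow without touching any step’s implementation. The step names and functions here are purely illustrative, a stand-in for Functions wired together by a Logic Apps workflow:

```python
# Sketch: a configuration-driven chain of processing steps. Each step
# takes the work item and returns an updated copy.
STEPS = {
    "validate": lambda order: {**order, "valid": True},
    "price":    lambda order: {**order, "total": order["qty"] * 10},
    "notify":   lambda order: {**order, "notified": True},
}

def run_chain(order: dict, chain: list) -> dict:
    # The "orchestrator": walks the configured chain, much like a
    # workflow definition calls one Function after another.
    for step in chain:
        order = STEPS[step](order)
    return order
```

Changing the behavior then means changing only the `chain` list, i.e. configuration data, not code.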
This re-combination approach is not limited to the business tier in serverless applications, but can be leveraged in all tiers of the solution, making it a fully multi-layered approach.
In the next post I am going to shed some light on the existing as well as new and exciting possibilities to create a serverless presentation layer for an application.