Digital Transformation – a few things to consider

Nearly all businesses currently strive to transform digitally, and I have been able to support a few of them on their journey. Digital transformation is, without a doubt, essential for a company to reach the next level of efficiency, data insight, scalability and so on. These arguments, and many more, have been discussed widely, and at first glance it all seems understandable and perhaps not so hard to execute.
Nevertheless, once you start on that digital transformation journey, a lot of important and underestimated aspects come into view that were not obvious from the beginning.

More than just pushing a button – Digital Transformation

It’s foremost a cultural change, not a technological one
Quite often, companies at the beginning of the transformation believe that starting a single Cloud-based solution project already transforms their business. Of course, a single project, even if it may be a great start to find out how things work technologically, does not digitally transform a company. Quite often, these projects end up as Cloud islands with little or limited benefit for anybody. The challenge to overcome in order to succeed is to identify a company’s solution and data silos and make them accessible, in a secure and transparent way, to everybody who could benefit from them.
Only then are you able to unearth the treasures of insight and information lying in this data. Breaking up these silos is not easy. Not because it is technically hard to do, but because of the mindset change required of employees to make it possible. Established attitudes from the past, such as “I have knowledge and data, therefore I am needed”, need to change into “I provide data and share them, along with insights, as efficiently as possible”. This turns a lot of internal kingdoms into rubble, and it is most likely here that you will find significant resistance.

Digital Transformation makes no sense without a company-wide perspective
As already stated, a single Cloud solution does not transform a company. It is the whole company that needs to migrate to a new mindset and a new digital ecosystem. Think of it as installing a new operating system onto parts of the old hardware, while replacing outdated hardware components with fresh and shiny ones from the outside (aka the Cloud). This requires spending some thought on what the important components of your business are and what the best way to transform them would be.
Of course, all this needs to happen without interrupting the current (well-running) business, so as to ideally enable a smooth transition. And yes, you guessed it, this is far from easy to do.
Nevertheless, it can be done if you are well planned and prepared. You need to do a thorough inventory of what is currently going on, which enables you to decide on migration paths and migration priorities for each of your business components.
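To make such an inventory actionable, you could score each component by expected Cloud benefit versus migration effort. Here is a minimal Python sketch of that idea; the component names, scores and the simple benefit/effort ratio are illustrative assumptions, not a prescribed method:

```python
# Hypothetical sketch: ranking business components for Cloud migration.
# Scores and component names are made-up examples.

def migration_priority(components):
    """Order components by expected Cloud benefit relative to migration effort."""
    return sorted(
        components,
        key=lambda c: c["benefit"] / c["effort"],  # simple benefit/effort ratio
        reverse=True,
    )

inventory = [
    {"name": "reporting", "benefit": 8, "effort": 2},
    {"name": "legacy ERP", "benefit": 5, "effort": 9},
    {"name": "file shares", "benefit": 6, "effort": 3},
]

for component in migration_priority(inventory):
    print(component["name"])
```

In a real assessment the scoring would of course be multi-dimensional (compliance, dependencies, running costs), but even a crude ranking like this forces the right discussions.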

Should the new company setup be Cloud-native or hybrid (combining Cloud and on-premise)?
Do not waste time on that question, it is always hybrid! Meaning that if you have an established business, there are always components that cannot or should not be moved to the Cloud in the foreseeable future. There is nothing bad about that. Cloud technology does not have to be the only solution for all business problems. Be pragmatic and do what makes sense! Move to the Cloud where the largest benefits are to be expected.
In any case, set up the required infrastructure to support a sound hybrid Cloud scenario.

Get identity management right!
While setting up that hybrid Cloud scenario, you will automatically run into the identity management question. Should I sync all my users into the Cloud? Or only a few chosen ones? And is this secure?
It is more than wise to sync all your users into a Cloud directory. Everything else is pretty useless when you look at future applications.
If set up correctly, a Cloud identity store is more secure than an on-premise one, because of the enhanced monitoring and auditing capabilities in Cloud, along with the additional supervision of the dedicated security teams of Cloud providers.
These guys are fighting back attacks every minute. Especially as a small to mid-size company, you will never be able to match their expertise and execution level with your own resources!
But let us get back to identity management. There must only be one identity system, and this is pretty important! If there are currently several identity systems in your enterprise, the first task is to consolidate them. Otherwise, you are going to end up in an identity synchronization nightmare, which will be a security nightmare at the same time.
After that, it is very important to keep on-premise and Cloud identities in sync, because for Cloud solutions the traditional means of enterprise protection, network perimeters, have lost their effectiveness. This is due to the fact that we are communicating over public networks most of the time, which can neither be controlled nor considered safe! As Cloud offerings grow, a lot of Software as a Service (SaaS) components will be part of a solution, and with them you have no, or only limited, ways to establish network perimeters.
In Cloud solutions, network perimeters as a protection mechanism are normally replaced by identity perimeters, which are able to span a complete solution context. You may still find network perimeters, such as V-Nets, securing certain solution parts, e.g. microservice clusters, API management, etc., but these cannot play a role in securing an overall solution.
Therefore, to get security right for a Cloud solution, sound identity management enabling robust endpoint protection is key. This fits the “defense in depth” idea: rather than relying on a single network boundary as the one line of defense, you rely heavily on protecting each of the different communication endpoints a solution is using.
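To make the endpoint-protection idea concrete, here is a minimal Python sketch of an endpoint validating a signed identity token instead of trusting the network it came from. The shared HMAC secret and the token format are purely illustrative assumptions; a real Cloud solution would validate OAuth/OpenID Connect tokens issued by a directory service:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"shared-secret-for-illustration-only"  # real world: directory-issued tokens

def sign_token(claims):
    """Issue a toy token: base64 claims plus an HMAC signature."""
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    signature = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + signature

def verify_token(token):
    """An endpoint checks the identity itself, regardless of network origin."""
    payload, _, signature = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return None  # tampered or unknown issuer: reject
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if claims.get("exp", 0) < time.time():
        return None  # expired identity: reject
    return claims
```

The point of the sketch is the shape of the check, not the crypto: every endpoint verifies *who* is calling, which is exactly what an identity perimeter means in practice.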

Outsmart your old data center
When migrating to the Cloud, a thorough understanding of Cloud technology benefits, and of its strengths and weaknesses for your business solutions, needs to be established up-front. Educate your teams and experts early and get them fully aboard. Do not underestimate this challenge! Many transformation projects have died because teams did not know what to do in the Cloud and were not convinced.
In addition, just copying solution VMs from an on-premise data center into the Cloud (lift and shift) is seldom an efficient and rewarding approach. Instead, try to re-think your business processes and solution aspects from the new perspective Cloud computing offers. Ask yourself whether, with the power of the Cloud ecosystem at hand, you could simplify, streamline or even get rid of a business process in a migrated solution.
This is the spot where you can really make a difference! Set up highly scalable, flexible and easy-to-change business processes that enable you to stay ahead of the competition.
To achieve this, your development teams need to re-think their traditional implementation approach. Classical enterprise solutions require clusters of VMs, load balancers, etc. Try to get rid of as many of these components as possible by using the highly efficient low-code/serverless approach, which is only possible in Cloud solutions.
And again, there will be some non-technical challenges to face, such as the “not invented here” paradigm, along with developer pride (“I want to build microservices I can fully control!”), that need to be overcome.
As soon as the dust settles and the new approach can be applied, you will experience a boost in efficiency, along with the ability to make real-time changes to business processes, which will become a huge competitive advantage!

Consider Digital Transformation as a journey!
You will not be able to achieve everything overnight. Some items will be implemented fast, some are going to take more time. Have a well-planned company-wide approach ready and implement the means and mechanisms to execute sound governance on all migration activities.
Governance is very important to keep all involved players on the same page. Re-evaluate and re-think approaches and technologies pragmatically as you go and as technology evolves (it does so fast in the Cloud), but keep your eyes on the target: transforming your company into a digital one. As more and more components reach the new ecosystem, it will not be too hard for your teams to initiate the next round of innovation.


Real-time Information Push with Serverless SignalR

Quite often there are situations where dynamically changing information needs to be delivered proactively, as fast as possible.
Stock market info is the poster child here, but there are quite a few other day-to-day use cases that also require close to real-time display of relevant data on many screens and across locations. Traffic information, factory line status and logistics tracking are a few examples.

Looking at the Microsoft technology stack, SignalR is the tool of choice to tackle these requirements.
SignalR leverages several different approaches, such as WebSockets, Server-Sent Events or Long Polling, transparently for developers in the background, to deliver the information needed in the best way possible, based on the connectivity scenario and quality of a client application.
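The transport fallback can be pictured as a simple negotiation: both sides agree on the best transport they have in common. The following Python sketch only illustrates that idea; the real negotiation is handled for you by the SignalR client and server libraries:

```python
# Illustrative sketch of the transport fallback SignalR performs transparently.
# Preference order mirrors the one described above.

TRANSPORT_PREFERENCE = ["WebSockets", "ServerSentEvents", "LongPolling"]

def pick_transport(client_supports, server_supports):
    """Return the best transport both sides support, in preference order."""
    for transport in TRANSPORT_PREFERENCE:
        if transport in client_supports and transport in server_supports:
            return transport
    raise RuntimeError("no common transport available")

# An older proxy environment without WebSocket support falls back automatically:
print(pick_transport({"ServerSentEvents", "LongPolling"},
                     set(TRANSPORT_PREFERENCE)))
```

As a developer you never write this logic yourself; that it simply happens in the background is exactly the convenience SignalR provides.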

It is great to see that SignalR is now provided in “Serverless Mode” by the Azure SignalR Service, which takes the scaling and infrastructure maintenance effort for this service away from developers.

Following the very efficient low-code/serverless approach I am always propagating, I want to recommend having a look at this interesting sample, which shows how to do real-time communication leveraging the serverless SignalR Service in combination with Azure Functions.

Take it from me, SignalR is a great and fun technology to work with!



A heartfelt welcome to the Wechsler Consulting Cloud Campus section!

This is our news, training, information and workshop area.
Here we want to help you through the information jungle around Cloud technologies. We also want to keep things as easy and understandable as possible and are currently joining forces to get more valuable content onto our Campus.

If there is something interesting you would like us to shed some light on, just let us know by dropping a comment or sending an email to: info(at) .

How to handle state in serverless applications

There are two ways to handle state in applications. One is to keep the state close to the business logic (in-memory); this is called “stateful”. The other is to persist state somewhere in a store, e.g. a SQL DB, a document DB or even a Blob, away from the business logic; this is called “stateless”. Both variations have pros and cons; here are the most striking ones:



Stateful

Pros:
  • Very fast access to state data
  • Straightforward to implement

Cons:
  • Hard to use in scenarios with concurrent access
  • Persistence is not easy, especially if the persisted state needs to be up-to-date
  • Difficult to synchronize with other systems (e.g. between Azure regions), which is especially worrisome in high availability / disaster recovery scenarios
  • Does not scale well in concurrency scenarios
  • Adds a lot of state handling logic to the business logic if you want to satisfy more complex scenarios such as session context, transactions or multi-tenancy
  • Hard to debug
  • Data is volatile and therefore difficult to re-use
  • Memory use grows linearly with the amount of data, which might create application problems under high load
  • Re-use of stateful instances might get difficult/problematic

Stateless

Pros:
  • Persisted state can be accessed easily
  • Great in concurrency situations if a suitable store (database) is used, because the store handles access synchronization
  • Good to debug; there is good tool support for many stores
  • Session or transactional capabilities are quite often built into stores
  • Great data re-use options in other parts of the application
  • Good data synchronization capabilities, which enables robust HA/DR scenarios
  • Easy re-use of stateless components

Cons:
  • Implementation not as straightforward, because of store access
  • Data access from business logic not as fast as stateful direct memory access
  • Requires an additional PaaS store, such as Azure SQL or Cosmos DB, bringing in additional infrastructure costs / component risks

No wrong or right

Real life is not black or white, and a recommendation to use just one of these approaches certainly will not fit all possible use cases. However, in serverless applications a stateless approach should be favored, because it enables true flexibility, re-use and granularity without worrying about state handling.
Stateful scenarios make sense especially if one thinks of them as a “cache”, ideally backed by persisted data from a store and kept up-to-date via events or cache expiration.
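That cache idea can be sketched in a few lines of Python. This is an illustrative read-through cache with expiration, with a plain dict standing in for the persistent store:

```python
import time

class ReadThroughCache:
    """Stateful in-memory cache backed by a persistent store (here: a dict
    standing in for a database), kept fresh via time-based expiration."""

    def __init__(self, store, ttl_seconds=30.0):
        self._store = store
        self._ttl = ttl_seconds
        self._cache = {}  # key -> (value, loaded_at)

    def get(self, key):
        entry = self._cache.get(key)
        if entry and time.monotonic() - entry[1] < self._ttl:
            return entry[0]          # fast in-memory hit
        value = self._store[key]     # fall back to the persisted state
        self._cache[key] = (value, time.monotonic())
        return value

db = {"answer": 42}
cache = ReadThroughCache(db, ttl_seconds=60)
cache.get("answer")   # first access loads from the store
db["answer"] = 43     # the store changes...
cache.get("answer")   # ...but the cache still serves 42 until the TTL expires
```

In a real solution you would invalidate via events (e.g. a change feed) instead of, or in addition to, a TTL, but the division of labor is the same: the store owns the truth, the stateful part only accelerates access.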
Azure Functions are designed to be used in a stateless fashion, and most available 3rd-party connectors adhere to this paradigm as well.

Choosing a stateless store

As you can probably guess, the choice of data store has quite an impact on a solution. We have been talking about Azure SQL, Cosmos DB and Blobs; Azure Tables should also be mentioned in this context. Looking at functionality, databases should be preferred over relatively raw storage solutions such as Blobs or Azure Tables. If your application is not a very simple one, or might grow, those raw stores do not provide functionality you might need over time.

Azure SQL and Cosmos DB do provide a lot of data handling functionality.
Porting a SQL-based application to Azure SQL might provide some good opportunities for code re-use (e.g. stored procedures) from an existing system. Azure SQL has good data synchronization mechanisms with failover capabilities (single master) and provides great scale via partitioning or sharding of data.
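The sharding idea can be illustrated with a small Python sketch that routes each partition key deterministically to one of several shards. The shard names and the hashing scheme are assumptions made purely for illustration; Azure SQL comes with its own elastic database tooling for this:

```python
import hashlib

# Hypothetical sketch: routing rows to shards by hashing a partition key,
# the basic idea behind scaling a relational store horizontally.

SHARDS = ["sqldb-shard-0", "sqldb-shard-1", "sqldb-shard-2"]  # assumed names

def shard_for(partition_key):
    """Map a partition key deterministically onto one of the shards."""
    digest = hashlib.sha256(partition_key.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(SHARDS)
    return SHARDS[index]

# The same key always lands on the same shard, so reads find their data:
print(shard_for("customer-1001"))
```

The important property is determinism: as long as every caller uses the same function, data for one customer always lives in one place.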

If you are completely free to choose, I definitely recommend having a look at Cosmos DB. Its data access performance is absolutely fantastic: I have seen read access durations of 1-2 milliseconds. It synchronizes instances around the globe, configured with a mouse click, and enables multi-master scenarios with different consistency levels. Additionally, Cosmos DB can be enhanced with the powerful indexing and search capabilities offered by Azure Search, and it provides connectors into the “Big Data” world, e.g. for Azure Databricks.
There is one drawback with Cosmos DB, which is the higher price compared to Azure SQL or Azure Storage. In simple scenarios, where cross-region sync, high-speed data reads and multi-master setups are not required, these higher costs may not be justified.
Nevertheless, if your requirements are more demanding, Cosmos DB will be your friend! Compare the higher Azure costs to the implementation and infrastructure efforts saved. It should be taken into account that Microsoft needs to set up data centers, networks and servers to provide the “Cosmos DB level of comfort” to developers, too.


Where to put serverless business logic?

Good question!
There are several platform services in Azure you can put a serverless application’s logic into, but not all of them might suit your needs.

Let us have a look. Among others, there are:

  • Azure Functions
  • Logic Apps
  • Microsoft Flow

There are quite a few more PaaS services you can use, but for a start let us stay with those mentioned.

Sorting out what to choose

The basic bricks in the Azure serverless construction kit are Azure Functions. They should be used to implement custom pieces of functionality. It is important to have many small functions, not a single huge one holding the complete logic of an application. This enables flexible combination as well as easy re-organization of functions, giving you good granularity as business requirements change.

Azure Functions can call each other directly, which is perfectly valid, or use the services of an orchestrator such as Logic Apps or Flow to create workflows based on self-implemented Functions, 3rd-party functionality offerings, which are pulled in via “connectors”, or a mixture of both. The latter scenario is most probably the one desired for many business solutions: use custom as well as 3rd-party functionality bricks to get things done fast and efficiently.
The difference between Flow and Logic Apps is that Flow is an external service used to orchestrate selected functionality exposed by one’s app through web hooks. It is therefore much like the better-known If-This-Then-That (IFTTT) orchestrator and is ideally used to enable anybody on the outside to use functionality exposed by your app, if desired.
Because of this, and the fact that normally nobody wants to expose all of an app’s inner workings, Logic Apps, as the Azure-native orchestrator, would be the best choice for the development of a custom solution.

Leveraging other Cloud services

As already mentioned, there are a lot of 3rd-party building blocks one can use, e.g. to send mail or Twitter messages, with the help of connectors. Fortunately, there are also connectors for other Azure infrastructure parts and services, such as message queues. You could set up a classical Service Bus namespace, or rely on the global Azure Event Grid messaging infrastructure as well. This enables a solution to work asynchronously, buffering peak loads or even surviving smaller outages without losing information.
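The buffering effect of a queue can be pictured with a few lines of Python using the standard library’s queue, standing in for Service Bus or Event Grid: the producer keeps enqueuing during a peak while the consumer drains at its own pace, and nothing is lost in between.

```python
from queue import Queue

# Minimal sketch of how a message queue decouples producer and consumer.
# A burst of work is buffered instead of overwhelming (or losing) requests.

buffer = Queue()

def producer(events):
    for event in events:
        buffer.put(event)        # enqueue even if the consumer is slow or down

def consumer():
    processed = []
    while not buffer.empty():
        processed.append(buffer.get())  # drain at the consumer's own pace
    return processed

producer([f"order-{i}" for i in range(5)])   # peak load arrives
print(consumer())                            # processed later, nothing lost
```

The Cloud services add what this sketch leaves out: durability across restarts, delivery guarantees and scale, which is exactly why you rent them instead of building them.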

What else do we need?

At the end of the day, we want to persist our data in a reliable store, and here it comes in handy that Azure offers Blob, Table and Azure SQL storage services, to be tailored to your needs.
In this context, Cosmos DB is one of the new shining stars, providing polyglot access in easily deployed, cross-regional, multi-master NoSQL store scenarios. This DB is really fast and unbelievably efficient. However, it is not one of the cheapest services.
Another, more than important, pillar of a solution is identity. In Azure you are able to choose between Azure Active Directory, if your solution is facing company employees, and Azure Active Directory B2C, if real customers are using your application. Both directories provide state-of-the-art security and identification mechanisms leveraging the OAuth and OpenID Connect standards. And no worries, it is also possible to use both within one solution to satisfy the needs of different roles.
Example: an employee as content provider using AAD, and a customer as content subscriber identified via AAD B2C!

Quite often, solutions need to work with data: search, evaluate, recognize trends or make recommendations. At this point, Azure intelligence backed by Azure Search, Cognitive Services and Machine Learning comes into play. These services cover a broad range of complexity, reaching from easy-to-use picture recognition to highly demanding AI models predicting car prices. It is certainly a good idea to start with low-hanging fruit, such as providing good search capabilities in your app, before delving into the deep seas of data lakes and analytics clusters to back continuously optimized AI models.

And yes, there is more …

With the services described, we are already in for a good start to create a first serverless solution. But we have by far not seen all of the possibilities. Nevertheless, this is a good bridgehead to rest at for the moment. Rome was not built in a day, and we are going to make our way through the jungle episode by episode.


Serverless UI in Azure

Basically, any application you are not hosting on your own server can be considered serverless, provided it does not need one of your servers as a backend, of course.
Due to this, even desktop apps can be considered serverless, but nowadays the mainstream approach has shifted to applications using a backend API in the Cloud. Desktop apps also bring installation and security issues and have largely been replaced by apps loaded from a store, which takes care (or at least should) that no malware can be installed on your mobile or desktop client.
This approach works well for commercial apps, but for custom business solutions the store overhead (registration with a store provider, additional testing costs) is often avoided by delivering HTML/JavaScript-based single page applications (SPAs) to employees, especially if we are talking about internal business solutions.

These apps can either be loaded from a storage location in the Cloud, for example an Azure Blob, which can even be distributed world-wide via a content delivery network (CDN) for high-speed loading, or served by a web application hosted as an Azure App Service.
A point in favor of using an Azure App Service is the additional infrastructure one gets from the platform regarding development and deployment, easy scaling, security and monitoring.

To develop SPAs, profound knowledge of HTML and JavaScript, and of frameworks such as Angular, jQuery or React, to name just the prominent ones, is required if you want to achieve good to great results.

This development process is also not as fast as it was in the past, for example when creating business applications in Visual Basic, which had other issues, of course.

In this light, it may be interesting that there is a new Microsoft .NET Core technology on the horizon called Blazor, which enables developers to code web applications in a combination of HTML and C#, with a full round-trip development experience in Visual Studio.
Blazor comes in two flavors, or hosting models:

  • Hosted in an ASP.NET app service, running as server-side code that sends rendered HTML pages down to the browser client; SignalR is used for interaction with the backend (available right now with ASP.NET Core 3.0).
  • A client app hosted in the browser, leveraging the new WebAssembly standard to be compatible with all common browser engines. The local app can use Web API calls or SignalR to communicate with the backend (available in the future with .NET 5.0, which includes ASP.NET Core, now in public preview).

Quite a bit of this new approach is still in the making, but what looks compelling to me is the re-use of C# skills in the UI layer, in combination with the excellent tool support through Visual Studio. It also promises some reduction of the complexity and dynamics of the fast-changing releases of the JavaScript frameworks, which quite often make it a challenge to take future-proof decisions for larger teams in longer-lasting projects. Blazor therefore also has the potential to improve development speed significantly!

Blazor also comes with a componentized approach, which was introduced a long time ago in various Microsoft technologies, ranging from VB to Office, and has also been implemented by JavaScript frameworks such as React.
Components from Razor (the preceding ASP.NET server-side framework) can be re-used in Blazor, and due to this a really good 3rd-party offering of grids, chart and gauge displays, calendars, date and time pickers, and whatever else one needs for an application, is already available.

We will see how it evolves, but the first impression of Blazor is quite promising.


Serverless – What is the difference?

Classic business applications

Let us have a look at the classic solution architecture everybody has been implementing for quite a few years, and which worked (and still works!) in our company data centers.

Normally you will find three tiers of application layering:

  • Presentation layer
  • Business logic
  • Storage layer

The presentation layer serves application content either as a web application hosted on the server or as a single page application (SPA) that calls into APIs on the web server. SPAs have become more and more popular due to their superior user experience and good support in the newest JavaScript frameworks, such as Angular and React.
The business logic quite often is either a single (monolithic) API application or is spread across separate microservices providing the technical implementation of the use cases. The web application calls into these endpoints to trigger processes, get data, etc.
The backend of a business application is classically a SQL database, which takes care of efficient data storage as well as the data operations serving the requesting application’s needs.
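Put into a few lines of Python, the three tiers might look like this. It is a deliberately tiny sketch (a dict stands in for the SQL database, and the product data is made up) just to make the layering concrete:

```python
# Storage layer: a dict standing in for the SQL database
DB = {"42": {"name": "Widget", "price": 9.99}}

def load_product(product_id):
    """Storage access: fetch the raw record."""
    return DB[product_id]

# Business logic layer: implements the use case on top of storage
def price_with_tax(product_id, rate=0.19):
    product = load_product(product_id)
    return round(product["price"] * (1 + rate), 2)

# Presentation layer: an API endpoint would serialize this for the SPA
def render(product_id):
    return f"Widget costs {price_with_tax(product_id)} EUR"

print(render("42"))
```

Each layer only talks to the one below it, which is the discipline that makes the tiers swappable, and, as discussed next, also what serverless platforms exploit when they replace whole tiers with managed services.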

So far, so good!
This solution architecture works and has proven itself even in the highest load scenarios, when implemented correctly and run in a capable data center. Of course, there are the standard Cloud versus data center discussion points, but serverless is a bit different, because it is not necessarily bound to either of the two. One could have a serverless approach in one’s own data center, and e.g. Azure Stack is Microsoft’s implementation of this approach (there is more to it, of course, but let us leave it here for the moment).
The main problems of 3-tier applications are:

  • A lot of infrastructure/plumbing code implementation required
  • Due to this, tight coupling between infrastructure and code, to some extent even dependencies on hardware
  • Missing flexibility when it comes to change business logic
  • Many people and departments involved
  • Therefore lengthy processes required
  • Always combined tests for infrastructure and code required

These are just the prominent ones, and it all boils down to this: 3-tier architectures work, but they require quite an investment and are not a quick and easy setup.

What does serverless do differently?

From a company’s perspective, it is only important to implement the business logic and UI; all the other effort (building data centers, networks, servers and other hardware) is undertaken to run 3-tier applications because it is necessary, not because anybody really wants to do it.
To get out of this dilemma, serverless technologies try to put infrastructure functionality into reusable building blocks, which can be assembled and used independently from the underlying infrastructure and hardware.
With the arrival of the Cloud, this is also what makes it possible for a Cloud vendor to share its infrastructure easily between different customers, by dissolving any dependencies between code and infrastructure/hardware as much as possible.

What does it look like if one migrates a classic solution architecture into a serverless one?
Well, here is a (high-level) example, but I am going to delve into more of the details in my upcoming posts.

It is easy to see that there is not necessarily a change in the logical tiering, but rather in the platform services (PaaS) or components that are used in a dedicated tier. Through the re-use of components, there may also be more than the 3 classical layers, by combining the building blocks, such as Functions or Logic Apps, in a more granular way.
The very striking benefit of this approach is that one has to write only application-related code, which ideally is written in componentized form to mirror the building-block-like approach of the PaaS services used. Anybody who has played with Lego bricks growing up may see an analogy here.

The ease of use of these functional blocks, which have pre-built connectors into the rich functionality of Cloud services, enables unprecedented flexibility and speed in adapting an application’s logic to changing market requirements.

As an example, in the business logic tier, the chain of functionality calls shown below can easily be newly arranged by re-organizing it in configuration, without writing code, which is fast and cost saving.
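The idea of re-arranging a chain via configuration can be sketched like this in Python. The step names and the pipeline runner are illustrative assumptions; in Azure, a Logic App definition would play the role of the configuration:

```python
# Hypothetical sketch: the order of processing steps lives in configuration,
# so re-arranging the chain is a config change, not a code change.

STEPS = {
    "validate": lambda order: {**order, "valid": True},
    "enrich":   lambda order: {**order, "region": "EMEA"},
    "persist":  lambda order: {**order, "saved": True},
}

def run_pipeline(order, config):
    """Apply the configured steps to an order, in the configured sequence."""
    for step_name in config:      # the configuration defines the chain
        order = STEPS[step_name](order)
    return order

# Re-ordering or dropping a step only means editing this list:
config = ["validate", "enrich", "persist"]
print(run_pipeline({"id": 7}, config))
```

Swapping the list for `["validate", "persist"]` removes the enrichment step with zero code changes, which is the fast, cost-saving re-arrangement described above.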

This re-combination approach is not limited to the business tier in serverless applications, but can be leveraged in all tiers of the solution, making it a fully multi-layered approach.

In the next post I am going to shed some light on the existing as well as new and exciting possibilities to create a serverless presentation layer for an application.


Going “serverless” will change your business!

Every 10 to 15 years, a game-changing technology comes along that changes not only perspectives but also the way business is done.
For me, serverless computing is one of these game changers!

Yes, I know, the new technology does not yet fit all use cases and still has to mature, but I would like to argue that now is the right time to get familiar with it, simply because this will give you a competitive advantage over everybody else as soon as serverless becomes mainstream.

The term “serverless” is somewhat misleading, because it does not mean that there are no servers involved in running your code. Of course there are! And plenty!

The big difference is that these servers run in the Cloud data centers of vendors such as Microsoft with Azure, who try to create a user experience that shields customers from as many infrastructure woes as possible. Thanks to this, we, the customers, are able to focus on creating just the functionality required to handle our business logic.

Writing a business application, one does not have to worry about:

  • Creating a data center with building, HVAC and IT hardware
  • Power supply
  • Connectivity
  • Storage space
  • Scalability
  • Plumbing code between application layers
  • Physical servers (thus “serverless”)
  • Training IT personnel
  • Technical support
  • Security (well, at least to some extent)

All you need is an internet connection! Nevertheless, one should be aware that this means an “always on” Internet connection is a must. This is easy to achieve in metropolitan areas, but can be a challenge if you live further out in the country.

Going serverless, a Cloud vendor takes on the burden of nearly all the problems mentioned above, because it is easier and more efficient for them to do this at large scale. In return, a customer pays the vendor for the services and technologies offered, just as they are used. In the end, and if the prices are fair, this is a classical win-win situation for both sides.

One additional thing that stands out above all for developers as well as subject matter experts is flexibility. Going serverless does not mean that one just moves the old classical data center into the Cloud and all is set. It means that you now have access to a completely new fabric that offloads plumbing code and infrastructure pain from developers, letting them focus on use cases and business rules.
Furthermore, it makes it fast and easy to change and adapt these implementations to ever-changing real-life requirements.

With this at hand, building real-time business applications is not a dream!

In the next weeks and months, I am going to discuss and demonstrate different approaches to going serverless on Microsoft Azure, here on this blog.

I would be very happy if you are willing to share the fun and join me on this journey!