You may have read in some of my previous blog posts that I am now heavily focusing my efforts and attention on the Office365DSC project. This project lets you define the configuration of your Office 365 tenants as code. This makes it easy for you to integrate your configurations with existing CI/CD DevOps pipelines, and allows you to replicate configuration settings across multiple tenants.
Problem Statement
Not only is this project very ambitious and promising, it is also one of the first of its kind. Typical DSC modules aim at configuring a given physical server or component. You normally write the DSC configuration you wish to execute and assign it to the Local Configuration Manager (LCM) of the machine you wish to manage, so that the LCM can go and perform local operations. DSC is therefore typically thought of as being related to Infrastructure-as-a-Service (IaaS) to some degree. Office365DSC, however, doesn't work that way; it configures a remote Office 365 tenant, which is considered Software-as-a-Service (SaaS). Under the covers it still works much the same way: you assign the configuration to an LCM, which then performs the configuration steps. These steps all involve making remote calls to the various Office 365 APIs. The challenge at hand is that for SaaS and PaaS, DSC needs to monopolize the LCM of a machine somewhere, even though nothing will actually be applied to that machine itself. This creates the need for "middle-man" agents and adds complexity to the architecture of our environments. Of course, we could simply have that machine's LCM execute the configuration once to bring the remote tenant into the Desired State and then shut the machine down, but then we lose 50% of what DSC is all about: the monitoring and consistency check aspect.
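To make the workflow above concrete, here is a minimal sketch of what an Office365DSC configuration might look like. It is illustrative only: the `O365User` resource name, its parameters, and the tenant values are assumptions based on the module at the time of writing and may differ in your version. Note that the `Node localhost` block targets the LCM of the "middle-man" machine, while every setting inside it is applied to the remote tenant.

```powershell
# Sketch only: assumes the Office365DSC module is installed, and that the
# O365User resource and its parameters match your installed module version.
Configuration O365TenantConfig
{
    param (
        [Parameter(Mandatory = $true)]
        [System.Management.Automation.PSCredential]
        $GlobalAdminAccount
    )

    Import-DscResource -ModuleName Office365DSC

    # The LCM of this local node performs the work, but every resource
    # below results in remote calls against the Office 365 tenant.
    Node localhost
    {
        O365User JohnSmith
        {
            UserPrincipalName  = "john.smith@contoso.onmicrosoft.com"
            DisplayName        = "John Smith"
            Ensure             = "Present"
            GlobalAdminAccount = $GlobalAdminAccount
        }
    }
}

# Compile the configuration into a MOF file for the LCM to consume
O365TenantConfig -GlobalAdminAccount (Get-Credential) -OutputPath .\Output
```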
DevOps

In my opinion, configuration as code (a DSC script) makes even more sense when integrated with DevOps pipelines. System administrators make a code change and commit it back to Git or TFVC, the Continuous Integration (CI) pipeline copies these changes onto the server, and the Continuous Delivery (CD) pipeline automatically applies the configuration change to the environment. In most demos I do at conferences covering this topic, I have my Release (CD) pipeline upload the modified configurations to Azure Automation DSC and initiate a compilation job. Any machines that are connected to my Azure Automation DSC account and use the affected configuration then automatically obtain the new bits and update their configurations. In the case of SaaS this would also be feasible, but it would still require a VM to be assigned to my Azure Automation DSC account, meaning we'd still have that "middle-man" agent effect.
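The upload-and-compile step of that Release pipeline can be sketched with the Az PowerShell cmdlets. The resource group, Automation account, and file path below are placeholders for your own environment:

```powershell
# Sketch: upload a configuration to Azure Automation DSC and compile it.
# Resource group, account name, and source path are placeholders.
Connect-AzAccount

$params = @{
    ResourceGroupName     = "MyDscRG"
    AutomationAccountName = "MyAutomationAccount"
}

# Upload (or overwrite) the configuration script and publish it
Import-AzAutomationDscConfiguration @params `
    -SourcePath ".\O365TenantConfig.ps1" -Published -Force

# Start a compilation job so registered nodes pick up the new MOF
Start-AzAutomationDscCompilationJob @params `
    -ConfigurationName "O365TenantConfig"
```

Once the compilation job completes, any node registered against that Automation account and assigned this configuration pulls the new MOF on its next refresh.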
With Azure DevOps, we could always have the Release (CD) pipeline execute the configuration from a Build Agent directly. This would be, in a certain way, the equivalent of DSC Push mode. But because Build Agents are stateless in nature, the moment the configuration has been applied the agent shuts down, which means we also lose all monitoring/consistency-check capabilities. That is almost the same as what I described earlier, where we use a machine's LCM to execute the configuration once, wait for the remote tenant to reach the Desired State, and then free that LCM of the responsibility of keeping the tenant in that state. We might as well use traditional sequential PowerShell scripts to configure the environment at that point.
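A pipeline step doing this push would be little more than the following sketch (script name and credential retrieval are placeholders; in a real pipeline the credential would come from a secure variable or key vault, never from an interactive prompt):

```powershell
# Sketch of a Release pipeline step: compile and push the configuration once.
# The script path and credential handling are placeholders.
. .\O365TenantConfig.ps1

# Compile the MOF on the Build Agent
O365TenantConfig -GlobalAdminAccount $tenantCredential -OutputPath .\MOF

# Apply it once, Push-mode style, then the agent is recycled:
# no LCM is left behind to run periodic consistency checks.
Start-DscConfiguration -Path .\MOF -Wait -Force -Verbose
```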
Containers

Probably the most viable option for our scenario is to use containers to execute the configuration and perform the monitoring/consistency checks. Until SharePointPnP adds support for PowerShell Core, we are still very limited in our choice of container images: we need an image that runs WMF 4.0 or later. My recommendation is to run a Windows Server 2019 image (mcr.microsoft.com/windows:1809). As you can tell from the previous link, I personally use Docker for Windows to run my containers. With this approach, you can simply spin up a different container for each tenant you are trying to configure. You could also leverage Azure Container Instances to achieve the same thing in a completely cloud-hosted fashion.
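The per-tenant container idea might look something like the sketch below. The container name, mounted path, and keep-alive approach are all assumptions; the point is that the container's LCM applies the configuration and then stays alive so its periodic consistency checks keep running:

```powershell
# Sketch: run the configuration inside a Windows Server 2019 container,
# one container per tenant. Names and paths are illustrative.
docker pull mcr.microsoft.com/windows:1809

docker run -d --name o365dsc-contoso `
    -v C:\DSC:C:\DSC `
    mcr.microsoft.com/windows:1809 `
    powershell -Command 'Start-DscConfiguration -Path C:\DSC\MOF -Wait -Verbose; while ($true) { Start-Sleep 900 }'
```

To manage a second tenant, you would start another container (e.g. `o365dsc-fabrikam`) pointing at that tenant's compiled MOF.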
Summary
As you can tell from this article, we are still working out how to best position Office365DSC for customers, keeping the focus on reducing both the complexity requirements for the environment and the cost for users and their organizations. More details will be provided on this blog as we evolve our strategy. In the meantime, I would like to encourage the community to use the comments section below to start a constructive discussion around the topic!
I was pondering exactly this issue as I read yesterday's post. We would always need a "deployment" machine to configure Office 365, which I thought was weird, and you elaborated on exactly this point.
I think DSC as a whole was made for local machines, and for O365 configuration I would only deem the one-time configuration as "correct". You don't have any hooks into O365 to tell whether a change happened, and no official LCM like we do in Windows. Unless there is an official endpoint, I would deem all monitoring solutions non-production-ready (be it with an ACI, a VM running on Azure, or a VM running locally).
I do believe DSC is a great concept, and I do believe it might be a good thing for O365, so I'm really looking forward to the developments you make here.