Deploying a Multi-Server SharePoint Farm with PowerShell Desired State Configuration

In this article we will cover the process of writing a DSC Configuration with SharePointDSC and deploying it to multiple servers to create a SharePoint 2016 farm (note that this would also work for SharePoint 2013 with minor changes to the SPFarm block). We will be using a Push refresh mode, meaning that the Configuration will be manually applied to our servers rather than centrally managed by a Pull Server. Using this approach, if our configuration were to change, it would need to be manually re-pushed onto the servers in the farm. While I believe that in most cases you will wish to use a Pull refresh mode along with an Enterprise Pull Server to manage your SharePoint deployments, we are using a Push mode to keep things simple for the sake of this article.

The article will cover a scenario I will be demonstrating over at SPTechCon Austin next week. In this demo, I have a very small multi-server SharePoint 2016 farm that is made up of one dedicated SQL Server 2016 instance (SPTechCon-SQL), one Web Front-End (SPTechCon-WFE1), and one Application Server (SPTechCon-APP1). The configuration script will be built on a separate machine named SPTechCon-Pull and will be remotely applied to the servers in my farm from that machine. The figure below gives you a complete overview of the landscape of my demo.

Every server in my farm is a Windows Server 2016 Datacenter instance. As mentioned above, SPTechCon-SQL has SQL Server 2016 installed on it, and both SharePoint boxes (SPTechCon-WFE1 and SPTechCon-APP1) have the SharePoint 2016 bits installed on them (PSConfig was not run, just the SP2016 bits were installed).

As part of the demo, I will be using the following DSC configuration script to deploy my farm:

Configuration SPTechCon-OnPrem
{
    Import-DSCResource -ModuleName SharePointDSC

    $farmAccount = Get-Credential -UserName "contoso\sp_farm" -Message "Farm Account"
    $adminAccount = Get-Credential -UserName "contoso\sp_admin" -Message "Admin Account"

    Node SPTechCon-WFE1
    {
        SPFarm SPFarm 
        { 
            DatabaseServer           = "SPTechCon-SQL"
            FarmConfigDatabaseName   = "SP_Config" 
            Passphrase               = $farmAccount 
            FarmAccount              = $farmAccount
            AdminContentDatabaseName = "SP_AdminContent" 
            PsDSCRunAsCredential     = $farmAccount
            ServerRole               = "WebFrontEnd"
            Ensure                   = "Present"
            RunCentralAdmin          = $true
            CentralAdministrationPort = 7777
        }

        SPManagedAccount FarmAccount
        {
            AccountName = $farmAccount.UserName
            Account = $farmAccount
            PsDSCRunAsCredential     = $farmAccount
            DependsOn = "[SPFarm]SPFarm"
        }

        SPManagedAccount AdminAccount
        {
            AccountName = $adminAccount.UserName
            Account = $adminAccount
            PsDSCRunAsCredential     = $farmAccount
            DependsOn = "[SPFarm]SPFarm"
        }

        SPServiceAppPool SharePoint80
        {
            Name = "SharePoint - 80"
            ServiceAccount = $adminAccount.UserName
            PsDSCRunAsCredential     = $farmAccount
            DependsOn = "[SPManagedAccount]FarmAccount"
        }

        SPWebApplication RootWebApp
        {
            Name = "RootWebApp"
            ApplicationPool = "SharePoint - 80"
            ApplicationPoolAccount = $adminAccount.UserName
            Url = "http://SPTechCon-WFE1"
            DatabaseServer = "SPTechCon-SQL"
            DatabaseName = "WebApp-SharePoint-80"
            Port = 80
            PsDSCRunAsCredential = $farmAccount
            DependsOn = "[SPServiceAppPool]SharePoint80"
        }

        SPSite RootSite
        {
            Url = "http://SPTechCon-WFE1"
            OwnerAlias = $adminAccount.UserName
            Template = "STS#0"         
            PsDSCRunAsCredential = $farmAccount
            DependsOn = "[SPWebApplication]RootWebApp"
        }
    }

    Node SPTechCon-APP1
    {
        SPFarm SPFarm 
        { 
            DatabaseServer           = "SPTechCon-SQL"
            FarmConfigDatabaseName   = "SP_Config" 
            Passphrase               = $farmAccount 
            FarmAccount              = $farmAccount
            AdminContentDatabaseName = "SP_AdminContent" 
            PsDSCRunAsCredential     = $farmAccount
            ServerRole               = "Application"
            Ensure                   = "Present"
            RunCentralAdmin          = $false
        }
        SPServiceInstance BusinessDataConnectivityServiceInstance
        {
            Name = "Business Data Connectivity Service";
            Ensure = "Present";
            PsDSCRunAsCredential = $farmAccount;
        }
    }
}
$ConfigData = @{
    AllNodes = @(
        @{
            NodeName = "SPTechCon-WFE1"
            PSDscAllowPlainTextPassword = $True
            PSDscAllowDomainUser = $true
        },
        @{
            NodeName = "SPTechCon-APP1"
            PSDscAllowPlainTextPassword = $True
            PSDscAllowDomainUser = $true
        }
    )
}
SPTechCon-OnPrem -ConfigurationData $ConfigData
Start-DSCConfiguration -Path .\SPTechCon-OnPrem -Wait -Verbose -Force

In summary, that script will automatically create the SharePoint 2016 farm, assign the Web Front-End MinRole to SPTechCon-WFE1, create a Web Application on port 80 along with a root site collection, add SPTechCon-APP1 to the farm with the Application MinRole, and have it run the Business Data Connectivity Service. Nothing too complicated; again, the goal is to keep the demo focused and concise. Now, if you take a look at the last two lines of the script, you will see that we pass ConfigurationData to our Configuration to ensure passwords can be passed as plain text. In an enterprise context, you will normally want to encrypt the credentials using a certificate. In our case here, I omitted the certificate for simplicity's sake.
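For reference, here is a minimal sketch of what that ConfigurationData could look like with certificate-based encryption. The certificate path and thumbprint below are placeholders, and the sketch assumes the certificate's private key has been deployed to each target node:

# Hedged sketch only: replace the .cer path and thumbprint with those of a
# certificate whose private key is installed on each target node.
$ConfigData = @{
    AllNodes = @(
        @{
            NodeName        = "SPTechCon-WFE1"
            CertificateFile = "C:\Certs\DscPublicKey.cer" # public key used to encrypt credentials in the MOF
            Thumbprint      = "A1B2C3D4E5F6A7B8C9D0E1F2A3B4C5D6E7F8A9B0" # used by the LCM to decrypt them
        }
    )
}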

Let us now head onto our SPTechCon-Pull machine, which is nothing but a Windows 10 machine joined to the same domain as the servers to be configured. You will need to make sure that this machine has the SharePointDSC module installed on it, because it is required for us to compile our MOF files from this machine. Installing the module is as easy as running the following cmdlet if your machine has internet connectivity:

Install-Module SharePointDSC

If your machine doesn't have internet connectivity, then you will need to manually copy the SharePointDSC module onto the machine, under C:\Program Files\WindowsPowerShell\Modules\.
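If you have another machine with internet access, one way to do this (a sketch, assuming PowerShell 5 and the example paths shown) is to save the module locally and copy it across:

# Run on an internet-connected machine:
Save-Module -Name SharePointDSC -Path C:\Temp\Modules

# Then copy the folder to the disconnected machine (destination path is an example):
Copy-Item -Path C:\Temp\Modules\SharePointDSC -Destination '\\SPTechCon-Pull\c$\Program Files\WindowsPowerShell\Modules\' -Recurse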

Upon executing the script above, you will be prompted to enter the credentials for the farm account, which is required to execute the configuration steps, as well as the credentials for the admin account, which I use as the owner of my site collection. Using remoting, PowerShell will remotely contact the two SharePoint servers and initiate the configuration. After a few minutes, you should see that the execution completed (see figure below).

You should also be able to navigate to the site collection we created (in our case at http://sptechcon-wfe1/) or to central administration, which based on our script, is hosted on SPTechCon-WFE1 and exposed through port 7777.

What is important for you to take away from this example is that you don't have to log on to each server node that is part of your DSC configuration script in order to push the configuration onto it. This can all be done remotely from a machine that is external to the farm, as long as that machine has the SharePointDSC bits installed on it.
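For instance, assuming the compiled MOF files sit in the .\SPTechCon-OnPrem output folder, you could also push the configuration to a single node at a time:

# Push the configuration to one specific node only
Start-DSCConfiguration -Path .\SPTechCon-OnPrem -ComputerName SPTechCon-APP1 -Wait -Verbose -Force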

PowerShell Web Access to Manage SharePoint

In this article we will cover how you can deploy the PowerShell Web Access Gateway onto one of your SharePoint servers to allow remote users to perform remote PowerShell operations. PowerShell Web Access is a feature that was introduced back with Windows Server 2012, and which provides users with a web application mimicking the local PowerShell console, allowing them to run remote PowerShell commands against a server.


The idea here is that we wish to let the development team access some of the SharePoint cmdlets remotely so they can run reports and extract valuable information from the server without having the admin group act as a middleman. While we want to let the dev team execute remote PowerShell cmdlets, we want to restrict the set of operations they can call to cmdlets whose name starts with "Get-SP", as well as the "Merge-SPLogFile" cmdlet.

Overview of the Environment

Throughout this article I will be using a SharePoint farm built in Azure IaaS that is made up of three servers: one SQL, one Web Front-End, and one Application server. The domain used will be contoso.com, and a Security Group named "DevTeam" has been defined in Active Directory to group all members of the development team.

You only need to deploy the PowerShell Web Access Gateway to one server in your farm. In our case, we will be deploying it onto the Application server.

Servers

  • SP2013-SQL -> SQL Server 2012
  • SP2013-WFE01 -> Windows Server 2012 R2
  • SP2013-APP01 -> Windows Server 2012 R2

Installing the PowerShell Web Access Feature

The first step involved in deploying the PowerShell Web Access onto a server is to activate the PowerShell Web Access feature on the box. In our case, we will connect to the SP2013-APP01 server, which will be hosting the PowerShell Web Access application, and add the feature onto it. The feature can be installed using two different methods:

Activating the Feature

Option 1 – Using PowerShell

To install the feature using PowerShell, simply execute the following line of PowerShell:

Install-WindowsFeature -Name WindowsPowerShellWebAccess -ComputerName localhost -IncludeManagementTools

Option 2 – Using the Server Manager

Your second option is to open the Server Manager console on the server and to go to the Add Server Roles and Features section. On the Features page, scroll down to the Windows PowerShell group, and expand it. Make sure you check the Windows PowerShell Web Access feature, click Next and then Install.

Features

Installing the Application

Now that the feature is activated, we need to install the Web Application. Upon activating the feature on the server, several PowerShell modules specific to the PowerShell Web Access have been deployed to the server. You can take a look at the new cmdlets that are now exposed for the feature by running the following line of PowerShell:

Get-Command *PSWA*

pswacmdlet

The cmdlet we are interested in is named Install-PswaWebApplication, which takes care of deploying and configuring the Web Application endpoints in IIS. By default, that cmdlet will try to deploy the PowerShell Web Access application under the default IIS website, which runs on port 80. Since you are most likely going to be reserving port 80 for SharePoint Web Applications, I recommend you go into IIS and create a new Web Site bound to a different port. In my case, I will be creating a custom Web Site called "PWA" running on port 88, as sketched below.
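As a sketch, the same Web Site can be created with the WebAdministration module; the physical path used here is an arbitrary example:

Import-Module WebAdministration

# Create a folder for the new site, then bind the site to port 88
New-Item -ItemType Directory -Path C:\inetpub\PWA -Force
New-Website -Name "PWA" -Port 88 -PhysicalPath "C:\inetpub\PWA"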


We are now ready to call the installation cmdlet, passing it the name of our newly created Web Site as a parameter. Also note that for my example, I will be passing the -UseTestCertificate switch to the cmdlet, which creates and assigns a self-signed certificate as an SSL endpoint for my PowerShell Web Access application. In a production environment, it is recommended that you assign your own SSL certificate to secure the connection between the client OS and the host running the PowerShell Web Access application.

To go ahead and configure the application, simply execute the following line of PowerShell on the server:

Install-PswaWebApplication -UseTestCertificate -WebSiteName "PWA" -WebApplicationName "PWA"


That's it! We have now properly configured the PowerShell Web Access Gateway on our server. To verify that the installation worked as expected, simply launch a new browser instance and navigate to https://localhost/Pwa/. You should be presented with the PowerShell Web Access Gateway login page as shown in the following screenshot:


Now, something to watch out for: if you already have a Web Application that leverages SSL (running on port 443), you will have to change the SSL binding of your newly created IIS Web Site to another port number to prevent conflicts. In my case, none of my SharePoint Web Applications were using SSL, so there was no conflict to prevent.

Granting Permissions

The only way to grant a user or a group of users access to the PowerShell Web Access Gateway is to create a PswaAuthorizationRule. In a nutshell, a PswaAuthorizationRule is a mapping between a user or group and a set of PowerShell permissions. In a certain way, this resembles what the Just Enough Administration (JEA) feature is trying to achieve. Just like JEA, it involves creating a custom PowerShell file that defines what permissions the users will have against our PowerShell Web Access Gateway.

If you remember correctly, our scenario is that we want to prevent the development team from using any cmdlets whose name doesn't start with "Get-SP". The way to do this in PowerShell is to declare what we call a PowerShell Session Configuration file (PSSessionConfigurationFile). A PowerShell Session Configuration file has an extension of .pssc and defines what permissions users inheriting this configuration will have against the PowerShell runspace.

To create a new PSSessionConfigurationFile, you can simply call the following PowerShell line of code:

New-PSSessionConfigurationFile -Path <path>


This will automatically create your .pssc file in the specified folder. By default, this file contains the skeleton of the properties you can define:


Define Allowed CMDLets

The file above is where we would define the list of cmdlets we wish to let members of the Dev team use via our PowerShell Web Access Gateway. If you scroll down in the newly created .pssc file, you’ll see a property named VisibleCmdlets that is commented out. Simply uncomment this line and replace it with the following:

VisibleCmdlets = 'Get-SP*', 'Out-Default', 'Get-Command', 'Get-Member', 'Merge-SPLogFile'

This will ensure users can use any cmdlet whose name starts with "Get-SP", as well as Merge-SPLogFile. Get-Command and Get-Member are self-explanatory and can provide additional valuable information to the end users. Out-Default is required for the results of cmdlets to be printed back into the PowerShell Web Access session. If you forget to include it and a user tries to call Get-Command, for example, the command will execute fine on the remote server, but no results will be printed back to the end user.

Import the SharePoint PowerShell bits

Now this is where you really have to jump through hoops to get the process working as expected for a SharePoint environment. Any SharePoint administrator knows that in order for a PowerShell session to leverage the SharePoint cmdlets, you need to load the SharePoint snap-in into your session using the following line of PowerShell (launching the SharePoint Management Shell does it automatically for you in the background):

Add-PSSnapin Microsoft.SharePoint.PowerShell

So how are we to make sure this snap-in is available to our remote users' sessions in the PowerShell Web Access? Well, one thing is for sure: you don't want to add "Add-PSSnapin" to the allowed cmdlets in your PSSessionConfigurationFile. If you do, users calling the Add-PSSnapin cmdlet to import the SharePoint cmdlets will automatically get access to all cmdlets defined in the snap-in, even if we only allowed the Get-SP* ones. This is due to the order of operations. By default, when launching a new PowerShell Web Access session, PowerShell loads the available modules and then applies the VisibleCmdlets property to filter out the list of available cmdlets in the session. If users load the SharePoint cmdlets after the session has been created, the VisibleCmdlets filter is not applied to whatever is loaded after the fact. So, bottom line: do not allow "Add-PSSnapin" as a visible cmdlet.

Here is what we need to do instead. If you take a closer look at your .pssc configuration file, you'll see that it defines another commented-out property named "ModulesToImport". Uncomment this property and replace it with the following line:

ModulesToImport = "Microsoft.SharePoint.PowerShell"

Seems simple enough, right? Well, it is not. Our problem is that Microsoft.SharePoint.PowerShell is a snap-in, not a module. Even though the documentation says ModulesToImport can load snap-ins, it doesn't work for the SharePoint snap-in. So what are we to do? Well, we'll need to cheat PowerShell by creating a bogus SharePoint module!

Create a Fake SharePoint Module

By default, PowerShell registers all modules under C:\Program Files\WindowsPowerShell\Modules, so what we need to do is open Windows Explorer and navigate to that location. In there, create a new empty folder named Microsoft.SharePoint.PowerShell (you see where this is going). In that newly created folder, add a new empty file named Microsoft.SharePoint.PowerShell.psm1 and enter the following line of PowerShell in it:

Add-PSSnapin Microsoft.SharePoint.PowerShell -EA SilentlyContinue


Effectively, what we are doing here is cheating PowerShell into thinking it is loading a SharePoint module, making it load the .psm1 into the session, which in turn simply adds the snap-in to the session. Sneaky sneaky!
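If you prefer scripting it, the same bogus module can be created with a few lines of PowerShell (same paths as above):

# Create the fake module folder and the .psm1 file that loads the snap-in
$modulePath = "C:\Program Files\WindowsPowerShell\Modules\Microsoft.SharePoint.PowerShell"
New-Item -ItemType Directory -Path $modulePath -Force
Set-Content -Path "$modulePath\Microsoft.SharePoint.PowerShell.psm1" -Value "Add-PSSnapin Microsoft.SharePoint.PowerShell -EA SilentlyContinue"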

Setting the Language Mode

The last thing remaining for our PowerShell Session Configuration file to be complete and secure is to restrict the PowerShell language components users can use. By default, users are able to declare variables and assign objects to them. You may not see this as an issue at first, but consider the following scenario, where a user defines a new variable called $web and assigns it an SPWeb object by calling the following line of PowerShell:

$web = Get-SPWeb http://localhost

Because they have assigned an object to the $web variable, they can leverage the power of the PowerShell language to make method calls on that object. This means there is nothing preventing them from calling the following lines of PowerShell:

$web = Get-SPWeb http://localhost

$web.Delete()

In summary, if we grant users access to the full PowerShell language, they can still call potentially dangerous methods on objects. In the example above, while we did our best to block the user from using the Remove-SPWeb cmdlet, they can use a Get-* cmdlet to retrieve an object and then call the .Delete() method on it. Effectively, this comes back to them having access to the Remove-SPWeb cmdlet.

What we need to do to prevent this from happening is stop users from leveraging the full PowerShell language in their PowerShell Web Access sessions. This is done by modifying the LanguageMode property in our .pssc configuration file and setting its value to "NoLanguage":

LanguageMode = "NoLanguage"

Full .pssc file

In summary, here is the full content of our .pssc PowerShell Session Configuration File we will be using in our example to restrict access to the Dev Team:

@{
    SchemaVersion = '2.0.0.0'
    GUID = '78b552a2-34fa-43e5-b2b3-5a306907dc65'
    LanguageMode = 'NoLanguage'
    SessionType = 'Default'
    VisibleCmdlets = 'Get-SP*', 'Out-Default', 'Get-Command', 'Get-Member', 'Merge-SPLogFile'
    ModulesToImport = 'Microsoft.SharePoint.PowerShell'
}

Registering the PSSessionConfigurationFile

Once your .pssc file has been created, you need to register it in PowerShell. This is done by calling the following line of PowerShell:

Register-PSSessionConfiguration -Name "DevTeam" -Path <Path to the .pssc file> -RunAsCredential <Farm account>

This will prompt you to confirm the credentials of your farm account, which is required to access the local farm remotely. Simply provide the requested credentials and accept the prompt to complete the registration of your custom PowerShell Session Configuration.
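To confirm the registration succeeded, you can list the newly registered session configuration:

# Should return the DevTeam configuration along with the built-in ones
Get-PSSessionConfiguration -Name "DevTeam"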


Create the PowerShell Web Access Authorization Rule

We are almost there! The last thing left is to create the mapping between our Active Directory group and the custom PowerShell Session Configuration we just created. This is done by adding a new PswaAuthorizationRule on the server. In our case, our user group in AD is named "contoso\DevTeam", so in order to grant it permission to our custom DevTeam configuration, we need to execute the following line of PowerShell and accept the prompt:

Add-PswaAuthorizationRule -ComputerName localhost -UserGroupName "contoso\DevTeam" -ConfigurationName "DevTeam"


Grant Local Permissions to the Remote Users

In order for your remote users to be able to connect to your PowerShell Web Access Gateway, they also need to be added to the local Remote Management Users group:
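If the server runs PowerShell 5.1, you could do this with the LocalAccounts module; on older versions, net localgroup achieves the same result (both lines below are a sketch):

# PowerShell 5.1 and later:
Add-LocalGroupMember -Group "Remote Management Users" -Member "contoso\DevTeam"

# Older versions:
net localgroup "Remote Management Users" contoso\DevTeam /add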


Otherwise they will be presented with an error stating “Access to the destination computer has been denied. Verify that you have access to the destination Windows PowerShell session configuration […]”

Connect to the PowerShell Web Access Gateway

We are finally done. Everything is in place for your users to connect. In my case, I will be connecting as user Bob Houle (contoso\Bob.Houle), who is part of the contoso\DevTeam group.

Navigate to the Gateway's main page and provide the requested information, making sure you specify the name of the farm server onto which the PowerShell Web Access was deployed. The most important field is hidden in the Optional connection settings section: the Configuration Name field, in which you need to provide the name of the custom PowerShell Session Configuration we created (in our case, DevTeam).


Once connected, you should be able to run the Get-Command cmdlet to verify that you are only granted access to the cmdlets starting with Get-SP and to the Merge-SPLogFile one.


Enjoy!

 

Content Type Hub Packages

Every now and then I like to spend some time understanding the internals of the various components that make up SharePoint. This week, while troubleshooting an issue at a customer's, I decided to crack open the Content Type Hub to see exactly how Content Types get published down to subscriber site collections. In this article I will explain in detail the process involved in publishing a Content Type from the Content Type Hub to the various site collections that consume it.

First off, let us be clear: the Content Type Hub is nothing more than a regular site collection on which the Content Type Syndication Hub feature (a site collection feature) has been activated.

The moment you activate this feature on your site collection, a new hidden list called "Shared Packages" is created in the root web of that same site collection.

Right after activation, that list is empty and doesn't contain any entries. You can view it by navigating to http://<Content Type Hub Url>/Lists/PackageList/AllItems.aspx.
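You can also inspect the list from PowerShell. Here is a hedged sketch; the Hub URL is an example, and the column names used are the display names shown in the list view (internal field names may differ):

Add-PSSnapin Microsoft.SharePoint.PowerShell -EA SilentlyContinue

# Enumerate the entries of the hidden "Shared Packages" list
$web = Get-SPWeb "http://contenttypehub"
$list = $web.Lists["Shared Packages"]
foreach ($item in $list.Items)
{
    Write-Host ($item["Published Package ID"].ToString() + " - Unpublished: " + $item["Unpublished"])
}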

However, the moment you publish a Content Type in the Hub, you will see an entry for that Content Type appear.
Publish a SharePoint Content Type
SharePoint Content Type Hub Package

In the two figures above, we can see that we have published the Content Type "Message", which has an ID of 0x0107. Therefore the entry that gets created in the Shared Packages list has that same ID value (0x0107) set as its "Published Package ID" column. "Published Package ID" will always represent the ID of the Content Type that was most recently Published/Updated/Unpublished and which is waiting to be synchronized down to the subscriber site collections. The "Taxonomy Service Store ID" column contains the ID of the Managed Metadata Service that is used to syndicate the Content Type Hub changes to the subscriber sites. In my case, if I navigate to my instance of the Managed Metadata Service that takes care of the syndication process and look at its URL, I can see that its ID matches the value of that column (see the two figures below).

SharePoint Taxonomy Service Store ID
Managed Metadata Service Application ID

The "Taxonomy Service Name" column is self-explanatory; it represents the display name of the Managed Metadata Service Application instance identified by the "Taxonomy Service Store ID" (see the two figures below).

SharePoint Taxonomy Service Name
SharePoint Managed Metadata Service Application Name

The "Published Package Type" column will always have a value of "{B4AD3A44-D934-4C91-8D1F-463ACEADE443}", which means it is a "Content Type Syndication Change".
SharePoint Published Package Type

The last column, "Unpublished", is a Boolean value that indicates whether the operation added to the queue was a Publish/Update, in which case the value is set to "No", or an Unpublish operation, in which case the value is set to "Yes". The two figures below show the result of sending an "Unpublish" operation on a previously published Content Type to the queue.
SharePoint Unpublish a Content Type

Now what is really interesting is that even after the subscriber job (the Content Type Subscriber timer job) has finished running, entries in the "Shared Packages" list persist. In fact, they are required for certain operations in the Hub to work as expected. For example, when you navigate to the Content Type Publishing page, if there are no entries in the "Shared Packages" list for a given content type, you will never get "Republish" and "Unpublish" as options. The page queries the list to see if there is an entry proving the given Content Type was published at some point before offering the option to undo the publishing.

To better demonstrate what I am trying to explain, take the following scenario. Go to your Content Type Hub and publish the "Picture" Content Type. Once it is published, go back to that same Content Type Publishing page. You will be presented with only two options: Republish or Unpublish. The "Publish" option will be greyed out, because SharePoint assumes that, since you have an entry in the Shared Packages list marked with "Unpublished = No", the Content Type has already been published; therefore you can only "Republish" or "Unpublish" it. Now navigate to the "Shared Packages" list and delete the entry for that Content Type. Once the entry has been deleted, navigate back to the Content Type Publishing page for the Picture Content Type. The "Publish" option is now enabled, and the "Republish" and "Unpublish" ones are disabled. That is because SharePoint couldn't find proof in the "Shared Packages" list that this Content Type had been published in the past.

Also, if you were to publish a Content Type and later unpublish it, you would not see two entries in the "Shared Packages" list (one for the publish and one for the unpublish). The Unpublish operation simply updates the existing entry in the "Shared Packages" list and sets its "Unpublished" flag to "Yes".

If you were to create a custom Content Type, publish it, and then delete it, SharePoint is not going to automatically remove its associated entry in the "Shared Packages" list. Instead, the next time the "Content Type Hub" timer job runs, it will update the associated entry and set its "Unpublished" flag to "Yes", making sure the deleted Content Type never makes it down to the subscriber Site Collections.

How Synchronization Works

By now you are probably wondering how the synchronization process works between the Hub and the subscriber Site Collections if entries are always persisted in the "Shared Packages" list. The way this process works is actually quite simple. The "Content Type Subscriber" timer job is the one responsible for that operation. By default, that timer job runs on an hourly basis and indirectly (via Web Services) queries the "Shared Packages" list to retrieve all changes that have to be synchronized. The root web of every Site Collection that subscribes to the Content Type Hub exposes a property called "metadatatimestamp" that represents the last time the Content Type gallery for that given Site Collection was synchronized. The following PowerShell script can help you obtain that value for any given subscriber Site Collection.

$url = Read-Host "URL for your subscriber Site Collection"
$site = Get-SPSite $url
Write-Host $site.RootWeb.Properties["metadatatimestamp"]

When the "Content Type Subscriber" timer job runs, it goes through every subscriber Site Collection, retrieves its "metadatatimestamp" value, and queries the "Shared Packages" list, passing that time stamp. The query then returns only the entries whose "Last Modified" date is more recent than that time stamp. Upon receiving the list of changes to sync, the timer job retrieves the Content Type information associated with the listed changes from the Hub and applies it locally to the subscriber Site Collection's Content Type gallery. Once it has finished synchronizing a Site Collection, it updates that site's "metadatatimestamp" to reflect the new timestamp.

If you really wanted to, you could force every single Content Type listed in the "Shared Packages" list to be synchronized to a given Site Collection by emptying the "metadatatimestamp" property on that site. As an example, when you create a new Site Collection, its root web won't have that value set, and therefore every Content Type that was ever published in the Hub will make its way down to that new Site Collection. Using the interface, you can also blank out that property by going to the Content Type Publishing page and selecting the option to "Refresh all published content types on next update". All that this option does is empty the value of that property on the site.
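In PowerShell, clearing the property would look something like this sketch (the subscriber URL is an example):

# Blank out the metadatatimestamp property to force a full resync
$site = Get-SPSite "http://subscriber-site"
$rootWeb = $site.RootWeb
$rootWeb.Properties["metadatatimestamp"] = [String]::Empty
$rootWeb.Properties.Update()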


Document Sets

Let's now look at a more complex scenario (which is really why I started taking a closer look at the publishing process in the first place). Let us investigate what happens when we publish Content Types that inherit from the Document Set Content Type. In my example, I've created a new custom Content Type named "DocSetDemo". This custom Content Type, since it inherits from its Document Set parent, defines a list of Allowed Content Types. In my case, I only allow "Picture" and "Image" as Allowed Content Types.
Allowed Content Types

The moment you try to publish this custom Content Type, you will get the following prompt, which lets you know that every Allowed Content Type identified within your custom Content Type will also be published.

What that truly means is that not only is SharePoint going to create an entry in the "Shared Packages" list for your custom Content Type, it will also create one for every Allowed Content Type identified. In the figure below, we can see that after publishing my custom Content Type "DocSetDemo", which has an ID of 0x0120D5200049957D530FA0554787CFF785C7C5C693, there are three entries in the list: one for my custom Content Type itself, one for the Image Content Type (ID of 0x0101009148F5A04DDD49CBA7127AADA5FB792B00AADE34325A8B49CDA8BB4DB53328F214), and one for the Picture Content Type (ID of 0x010102).

How to use the ReverseDSC Core

The ReverseDSC.Core module is the heart of the ReverseDSC process. This module defines several functions that help you dynamically extract the DSC configuration script for each resource within a DSC module. The ReverseDSC Core is generic, meaning it applies to any technology, not only SharePoint. In this blog article I will describe in detail how you can start using the ReverseDSC Core module today and integrate it into your existing solutions. To better illustrate the process, I will use an example where I extract the properties of a given user within Active Directory using the ReverseDSC Core.

Getting Started

If you were to take a look at the content of the ReverseDSC.Core.psm1 module (https://github.com/NikCharlebois/SharePointDSC.Reverse/blob/master/ReverseDSC.Core.psm1), you would see about a dozen functions defined. The one we are truly interested in is Export-TargetResource. This function takes two mandatory parameters: the name of the DSC resource we wish to "reverse", and the list of mandatory parameters for the Get-TargetResource function of that same resource. The mandatory parameters are essential because without them, Get-TargetResource is not able to determine what instance of the resource we wish to obtain the current state for. A third, optional parameter lets you define a DependsOn clause in case the current instance depends on another one; however, let us not worry about that parameter for our current example.

As mentioned previously, for the sake of our example, we want to extract the information about the various users in our Active Directory. Active Directory users are represented by the MSFT_xADUser resource, so you will need to make sure the xActiveDirectory module is properly installed on the machine you are about to extract the information from.

Let us now take a look at the Get-TargetResource function of the MSFT_xADUser resource. The function only requires two mandatory parameters: DomainName and UserName.

Therefore we need to pass these two mandatory parameters to our Export-TargetResource function. Now, in my case, I know for a fact that I have a user in my Active Directory named "John Smith", who has a username of "contoso\JSmith". I also have a local copy of the ReverseDSC.Core.psm1 module located under C:\temp. I can then initiate the ReverseDSC process for that user by calling the following lines of PowerShell:

Import-Module -Name "C:\temp\ReverseDSC.Core.psm1" -Force
$mandatoryParameters = @{DomainName="contoso.com"; UserName="JSmith"}
Export-TargetResource -ResourceName xADUser -MandatoryParameters $mandatoryParameters

Executing these lines of code will produce the following output:

Since the Export-TargetResource function simply outputs the resulting DSC resource block as a string, you need to capture it in a variable and build the resulting DSC configuration yourself. The following modifications to our script will allow us to build the resulting Desired State Configuration script and save it locally on disk, in my case under C:\temp\:

Import-Module -Name "C:\temp\ReverseDSC.Core.psm1" -Force
$output = "Configuration ReverseDSCDemo{`r`n    Import-DSCResource -ModuleName xActiveDirectory`r`n    Node localhost{`r`n"
$mandatoryParameters = @{DomainName="contoso.com"; UserName="JSmith"}
$output += Export-TargetResource -ResourceName xADUser -MandatoryParameters $mandatoryParameters
$output += "    }`r`n}`r`nReverseDSCDemo"
$output | Out-File "C:\Temp\ReverseDSCDemo.ps1"

Running this will generate the following DSC Configuration script:

Configuration ReverseDSCDemo{
    Import-DSCResource -ModuleName xActiveDirectory
    Node localhost{
        xADUser baf586dd-3c2c-4131-9267-d4d8fb1d5d01
        {
            CannotChangePassword = $True;
            HomePage = "http://Nikcharlebois.com";
            DisplayName = "John Smith";
            Description = "John's Account";
            Notes = "These are my notes";
            Office = "Basement of the building";
            State = "Quebec";
            Fax = "";
            JobTitle = "Aquatic Plant Watering";
            Country = "";
            Division = "";
            Initials = "";
            POBox = "23";
            HomeDirectory = "";
            EmployeeID = "";
            LogonScript = "";
            GivenName = "John";
            EmployeeNumber = "";
            UserPrincipalName = "jsmith@contoso.com";
            ProfilePath = "";
            StreetAddress = "55 Mighty Suite";
            CommonName = "John Smith";
            Path = "CN=Users,DC=contoso,DC=com";
            HomePhone = "555-555-5555";
            City = "Gatineau";
            Manager = "CN=Nik Charlebois,CN=Users,DC=contoso,DC=com";
            MobilePhone = "";
            Pager = "";
            Company = "Contoso inc.";
            HomeDrive = "";
            OfficePhone = "555-555-5555";
            Surname = "Smith";
            Enabled = $True;
            DomainController = "";
            PostalCode = "J8P 2A9";
            IPPhone = "";
            EmailAddress = "JSmith@contoso.com";
            PasswordNeverExpires = $True;
            UserName = "JSmith";
            DomainName = "contoso.com";
            Ensure = "Present";
            Department = "Plants";
        }
    }
}
ReverseDSCDemo

Executing the resulting ReverseDSCDemo.ps1 script will generate a MOF file that can be used with PowerShell Desired State Configuration.
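Compiling and applying the generated configuration could look like this (run from the folder where you want the MOF output to land):

# Running the script compiles the MOF into a .\ReverseDSCDemo folder
C:\Temp\ReverseDSCDemo.ps1

# Apply the compiled configuration locally
Start-DscConfiguration -Path .\ReverseDSCDemo -Wait -Verbose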

Recap

The ReverseDSC Core allows you to easily extract all the parameters of an instance of a resource by simply specifying the few mandatory parameters its Get-TargetResource function requires. It does not take care of scanning all instances in an environment; creating that script is left to the user. That is not to say we will not be looking at ways of automating this in the future, but the current focus is to keep it as unit calls.

For a DSC resource to work with the ReverseDSC Core, you need to ensure it has a well-written Get-TargetResource function that returns the proper parameters. This should already be the case for any well-written resource out there, but it is not always so. In the past, most of the development effort for new DSC resources was put into the Set-TargetResource function to ensure the "forward" DSC route was working well. However, in order for the whole DSC process to work properly, it is crucial that your Get-TargetResource function be as complete as possible. After all, the Test-TargetResource function also depends on it to check whether or not your machine has drifted away from its desired state.

SharePoint Reverse DSC

If your SharePoint farm has internet connectivity, you can now install the SharePointDSC.Reverse script and all its prerequisites by simply running the following line of PowerShell:

Install-Script SharePointDSC.Reverse

In my previous blog article I introduced the concept of ReverseDSC, which is nothing more than a dynamic way of extracting a Desired State Configuration (DSC) script that represents the current state of any given environment. In this blog article, I will guide you through the process of executing the Reverse DSC script against an existing SharePoint 2013 or 2016 farm. Please note that while PowerShell v4 is supported by the SharePoint Reverse DSC script, it is highly recommended that you upgrade your environment to PowerShell v5 to fully leverage the goodness of the DSC engine.

While it is still not officially decided how the SharePoint Reverse DSC script will be distributed, I have taken the decision to go ahead and offer a temporary distribution via my Blog. A copy of the current script can be obtained here:

This version of the script currently supports the latest available bits of the SharePointDSC module (1.5.0.0). The package is made up of two files:

  • ReverseDSC.Util.psm1, the core ReverseDSC module which is generic to all DSC Modules (not just SharePoint)
  • SharePointDSC.Reverse.ps1, the main SharePoint specific PowerShell script responsible for extracting the current state of a SharePoint Environment.

As mentioned above, this script is optimized to run under an environment that has PowerShell v5. To determine what version of PowerShell you are using, simply run the following PowerShell command:

$PSVersionTable.PSVersion.Major

If you are running version 4, no worries: you can upgrade to version 5 by downloading and installing the Windows Management Framework (WMF) 5.0 on your various servers (note that this will cause downtime). If your organization is not yet ready to upgrade to WMF 5, you can either download and install the PackageManagement module for PowerShell 4, or simply manually install the SharePointDSC 1.5.0.0 module onto each of your servers. PackageManagement is simply used to automatically download and install the proper version of the SharePointDSC module from the PowerShell Gallery, assuming your server has internet connectivity (which it most likely won't anyway).
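For reference, when PackageManagement is available and the server does have connectivity, pulling the exact module version from the PowerShell Gallery is a one-liner:

# Install the specific version of SharePointDSC required by the script
Install-Module -Name SharePointDSC -RequiredVersion 1.5.0.0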

How to Use

  1. Extract the content of the package onto one of the SharePoint servers (Web Front-End or Application server). Make sure that both the .psm1 and .ps1 files are in the same folder.
  2. In an elevated PowerShell session (running as administrator), execute the SharePointDSC.Reverse.ps1 script.
  3. If you do not have the required version of the SharePointDSC module installed, you will be prompted to automatically download it or not. Note that this requires your server to have internet connectivity. (Note that I recommend you manually get module v1.5.0.0 onto your server.)
  4. When prompted to provide Farm admin credentials, simply enter credentials for any account that has Farm Admin privileges on your farm.
  5. The script may prompt you several times to enter credentials for various Managed Accounts in your environment. This is required in order for DSC to be able to retrieve any Password Change Schedules associated with your managed accounts. Simply provide the requested credentials for each prompt.
  6. The script will scan through all components supported by the SharePointDSC module and then compile the resulting DSC Configuration Script. Once finished, it will prompt you to specify the path to an existing folder where the resulting .ps1 DSC Configuration Script will be saved.
  7. The DSC Configuration Script will be saved with the name "SP-Farm.DSC.ps1" under the specified folder path. You can open the .ps1 file to take a close look at its content. The top comments section will provide insights about the Operating System versions, the SQL Server versions, and all the patches installed in your farm.
  8. To validate that the Reverse DSC process was successful, simply execute the resulting SP-Farm.DSC.ps1 file. It will prompt you to pick a passphrase and will automatically compile a .meta.mof and a .mof file for each of the servers in your farm.

Now that you have your resulting .MOF files, you can use them to replicate your environment to another location on-premises, upload the resulting .ps1 file into Azure Automation to create a replica of your environment in the cloud, or onboard your existing environment onto DSC. The next blog post in this series will go through the steps you need to take to onboard an existing SharePoint environment onto DSC using the ReverseDSC process.