PowerShell Web Access to Manage SharePoint

In this article we will cover how you can deploy the PowerShell Web Access Gateway onto one of your SharePoint servers to allow remote users to perform remote PowerShell operations. PowerShell Web Access is a feature that was introduced with Windows Server 2012; it provides users with a web application that mimics the local PowerShell console, allowing them to run remote PowerShell commands against a server.


The idea here is that we wish to let the development team access some of the SharePoint cmdlets remotely, so that they can run reports and extract valuable information from the server without having the admin group act as a middle man. While we want to let the Dev team execute remote PowerShell cmdlets, we want to restrict the set of operations they can call upon to cmdlets whose names start with "Get-SP", plus the "Merge-SPLogFile" cmdlet.

Overview of the Environment

Throughout this article I will be using a SharePoint farm built in Azure IaaS that is made up of 3 servers: 1 SQL, 1 Web Front-End, and 1 Application server. The domain used will be contoso.com, and a Security Group named “DevTeam” has been defined in the Active Directory to group all members of the Development team.

You only need to deploy the PowerShell Web Access Gateway to 1 server in your farm. In our case, we will be deploying it onto the Application server.

Servers

  • SP2013-SQL -> Windows Server 2012 R2
  • SP2013-WFE01 -> Windows Server 2012 R2
  • SP2013-APP01 -> Windows Server 2012 R2

Installing the PowerShell Web Access Feature

The first step involved in deploying the PowerShell Web Access onto a server is to activate the PowerShell Web Access feature on the box. In our case, we will connect to the SP2013-APP01 server, which will be hosting the PowerShell Web Access application, and add the feature onto it. The feature can be installed using two different methods:

Activating the Feature

Option 1 – Using PowerShell

To install the feature using PowerShell, simply execute the following line of PowerShell:

Install-WindowsFeature -Name WindowsPowerShellWebAccess -ComputerName localhost -IncludeManagementTools

Option 2 – Using the Server Manager

Your second option is to open the Server Manager console on the server and launch the Add Roles and Features wizard. On the Features page, scroll down to the Windows PowerShell group and expand it. Make sure you check the Windows PowerShell Web Access feature, click Next, and then Install.


Installing the Application

Now that the feature is activated, we need to install the Web Application. Upon activating the feature on the server, several PowerShell modules specific to the PowerShell Web Access have been deployed to the server. You can take a look at the new cmdlets that are now exposed for the feature by running the following line of PowerShell:

Get-Command *PSWA*


The cmdlet we are interested in is named Install-PswaWebApplication, which takes care of deploying and configuring the Web Application endpoints in IIS. By default, the cmdlet will try to deploy the PowerShell Web Access application under the default IIS website, which runs on port 80. Since you are most likely reserving port 80 for SharePoint Web Applications, I recommend you go into IIS and create a new website bound to a different port. In my case, I will be creating a custom website called "PWA" running on port 88.
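If you prefer to script that step as well, the following sketch does the same thing using the WebAdministration module (the physical path C:\inetpub\PWA is an arbitrary choice of mine, not something the installation requires):

# Create a folder for the new site, then an IIS website bound to port 88
Import-Module WebAdministration
New-Item -Path "C:\inetpub\PWA" -ItemType Directory -Force | Out-Null
New-Website -Name "PWA" -Port 88 -PhysicalPath "C:\inetpub\PWA"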


We are now ready to call the installation cmdlet, passing it the name of our newly created website as a parameter. Also, note that for my example, I will be passing the -UseTestCertificate switch to the cmdlet, which will create and assign a self-signed certificate as an SSL endpoint for my PowerShell Web Access application. In a production environment, it is recommended that you assign your own SSL certificate to secure the connection between the client OS and the host running the PowerShell Web Access application.

To go ahead and configure the application, simply execute the following line of PowerShell on the server:

Install-PswaWebApplication -UseTestCertificate -WebSiteName "PWA" -WebApplicationName "PWA"


That's it! We have now properly configured the PowerShell Web Access Gateway on our server. To verify that the installation worked as expected, simply launch a new browser instance and navigate to https://localhost/Pwa/. You should be presented with the PowerShell Web Access Gateway login page as shown in the following screenshot:

[Screenshot: PowerShell Web Access Gateway login page]

Now, something to watch out for: if you already have a Web Application that leverages SSL (running on port 443), you will have to change the SSL binding of your newly created IIS website to use another port number to prevent conflicts. In my case, none of my SharePoint Web Applications were using SSL, so there were no conflicts to be prevented.
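If you do run into such a conflict, the binding change can also be scripted. Here is a minimal sketch using the WebAdministration module; port 8443 is an arbitrary example, and the SSL certificate then has to be re-assigned to the new binding:

# Remove the 443 binding created during installation and add one on port 8443
Import-Module WebAdministration
Remove-WebBinding -Name "PWA" -Protocol "https" -Port 443
New-WebBinding -Name "PWA" -Protocol "https" -Port 8443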

Granting Permissions

The only way to grant a user or a group of users access to the PowerShell Web Access Gateway is to create a PswaAuthorizationRule. In a nutshell, a PswaAuthorizationRule is a mapping between a user or group and a set of PowerShell permissions. In a certain way, this resembles what the Just Enough Administration (JEA) feature is trying to achieve. Just like JEA, it involves creating a custom PowerShell file that defines what permissions the users will have against our PowerShell Web Access Gateway.

As you may recall, our scenario is that we want to prevent the development team from using any cmdlets whose names don't start with "Get-SP" (with the exception of Merge-SPLogFile). The way to do this in PowerShell is to declare what we call a PowerShell Session Configuration file (PSSessionConfigurationFile). A PowerShell Session Configuration file has an extension of .pssc and defines what permissions users inheriting this configuration will have against the PowerShell runspace.

To create a new PSSessionConfigurationFile, you can simply call the following PowerShell line of code:

New-PSSessionConfigurationFile -Path <path>


This will automatically create your .pssc file in the specified folder. By default, this file contains a skeleton of the properties you can define.


Define Allowed CMDLets

This file is where we define the list of cmdlets we wish to let members of the Dev team use via our PowerShell Web Access Gateway. If you scroll down in the newly created .pssc file, you'll see a property named VisibleCmdlets that is commented out. Simply uncomment this line and replace it with the following:

VisibleCmdlets = 'Get-SP*', 'Out-Default', 'Get-Command', 'Get-Member', 'Merge-SPLogFile'

This will ensure the users can use any cmdlets whose names start with "Get-SP", as well as Merge-SPLogFile. Get-Command and Get-Member are self-explanatory and can help provide additional valuable information to the end users. Out-Default is required for the results of cmdlets to be printed back into the PowerShell Web Access session. If you forget to include it and a user tries to call Get-Command, for example, the command will execute fine on the remote server, but no results will be printed back to the end user.

Import the SharePoint PowerShell bits

Now this is where you really have to jump through hoops to get the process working as expected for a SharePoint environment. Any SharePoint administrator knows that in order for a PowerShell session to leverage the SharePoint cmdlets, you need to load the SharePoint snap-in into your session using the following line of PowerShell (launching the SharePoint Management Shell does this automatically for you in the background):

Add-PSSnapin Microsoft.SharePoint.PowerShell

So how are we to make sure this snap-in is available to our remote users' sessions in the PowerShell Web Access? Well, one thing is for sure: you don't want to add "Add-PSSnapin" to the allowed cmdlets in your PSSessionConfigurationFile. If you do, users calling the Add-PSSnapin cmdlet to import the SharePoint cmdlets will automatically get access to all cmdlets defined in the snap-in, even if we only allowed the Get-SP* ones. This is due to the order of operations. By default, when launching a new PowerShell Web Access session, PowerShell loads the available modules, then applies the VisibleCmdlets parameter to filter out the list of available cmdlets in the session. If users load the SharePoint cmdlets after the session has been loaded, the VisibleCmdlets filter is not applied to whatever is loaded after the fact. So, bottom line: do not allow "Add-PSSnapin" as a visible cmdlet.

Here is what we need to do instead. If you take a closer look at your .pssc configuration file, you'll see that it defines another commented property named "ModulesToImport". Uncomment this property and replace it with the following line:

ModulesToImport = "Microsoft.SharePoint.PowerShell"

Seems simple enough, right? Well, it is not. Our problem is that Microsoft.SharePoint.PowerShell is a snap-in, not a module. Even though the documentation says ModulesToImport can load snap-ins, it doesn't work for the SharePoint snap-in. So what are we to do? Well, we'll need to trick PowerShell by creating a bogus SharePoint module!

Create a Fake SharePoint Module

By default, PowerShell registers all modules in C:\Program Files\WindowsPowerShell\Modules, so what we need to do is open Windows Explorer and navigate to that location. In there, create a new empty folder named Microsoft.SharePoint.PowerShell (you see where this is going). In that newly created empty folder, add a new empty file named Microsoft.SharePoint.PowerShell.psm1 and enter the following line of PowerShell in it:

Add-PSSnapin Microsoft.SharePoint.PowerShell -EA SilentlyContinue
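If you prefer to script the creation of this fake module, the following sketch does the same thing (the Modules path is the default one mentioned above):

# Create the fake module folder and drop a .psm1 whose only job is to load the snap-in
$modulePath = "C:\Program Files\WindowsPowerShell\Modules\Microsoft.SharePoint.PowerShell"
New-Item -Path $modulePath -ItemType Directory -Force | Out-Null
Set-Content -Path (Join-Path $modulePath "Microsoft.SharePoint.PowerShell.psm1") -Value 'Add-PSSnapin Microsoft.SharePoint.PowerShell -EA SilentlyContinue'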


Effectively, what we are doing here is tricking PowerShell into thinking it is loading a SharePoint module, making it load the .psm1 into the session, which in turn simply adds the snap-in to the session. Sneaky sneaky!

Setting the Language Mode

The last thing remaining for our PowerShell Session Configuration file to be complete and secure is to restrict the PowerShell language components the users can use. By default, users are able to declare variables and assign objects to them. You may not see this as an issue at first, but consider the following scenario, where a user defines a new variable called $web and assigns an SPWeb object to it by calling the following line of PowerShell:

$web = Get-SPWeb http://localhost

Because they have assigned the $web variable an object, they can leverage the power of the PowerShell language to make method calls onto that object. This means that there is nothing preventing them from calling the following lines of PowerShell:

$web = Get-SPWeb http://localhost

$web.Delete()

In summary, if we grant the users access to the full PowerShell object model, they can still call potentially dangerous methods on objects. In the example above, while we did our best to block the user from using the Remove-SPWeb cmdlet, they can use a Get-* cmdlet to retrieve an object and then call the .Delete() method on it. Effectively, this is equivalent to them having access to the Remove-SPWeb cmdlet.

What we need to do to prevent this from happening is prevent them from leveraging the full PowerShell language in their PowerShell Web Access sessions. This is done by modifying the LanguageMode property in our .pssc configuration file and by setting its value to “NoLanguage”:

LanguageMode = "NoLanguage"

Full .pssc file

In summary, here is the full content of our .pssc PowerShell Session Configuration File we will be using in our example to restrict access to the Dev Team:

@{
SchemaVersion = '2.0.0.0'
GUID = '78b552a2-34fa-43e5-b2b3-5a306907dc65'
LanguageMode = "NoLanguage"
SessionType = 'Default'
VisibleCmdlets = 'Get-SP*', 'Out-Default', 'Get-Command', 'Get-Member', 'Merge-SPLogFile'
ModulesToImport = "Microsoft.SharePoint.PowerShell"
}

Registering the PSSessionConfigurationFile

Once your .pssc file has been created, you need to register it in PowerShell. This is done by calling the following line of PowerShell:

Register-PSSessionConfiguration -Name "DevTeam" -Path <Path to the .pssc file> -RunAsCredential <Farm account>

This will prompt you to confirm the credentials of your farm account, which is required to access the local farm remotely. Simply provide the requested credentials and accept the prompt to complete the registration of your custom PowerShell Session Configuration.
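To double-check that the registration succeeded, you can list the new endpoint and its permissions:

Get-PSSessionConfiguration -Name "DevTeam" | Format-List Name, Permission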


Create the PowerShell Web Access Authorization Rule

We are almost there! The last thing left is to create the mapping between our Active Directory user group and the custom PowerShell Session Configuration we just created. This is done by adding a new PswaAuthorizationRule on the server. In our case, our user group in AD is named "contoso\DevTeam", so in order to grant it access to our custom DevTeam configuration, we need to execute the following line of PowerShell and accept the prompt:

Add-PswaAuthorizationRule -ComputerName localhost -UserGroupName "Contoso\DevTeam" -ConfigurationName "DevTeam"
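You can verify that the rule was created by listing the existing rules:

Get-PswaAuthorizationRule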


Grant Local Permissions to the Remote Users

In order for your remote users to be able to connect to your PowerShell Web Access Gateway, they also need to be added to the local Remote Management Users group on the gateway server.
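On Windows Server 2012 R2, this can be done from an elevated prompt, for example by adding the whole DevTeam group at once:

net localgroup "Remote Management Users" contoso\DevTeam /add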


Otherwise they will be presented with an error stating “Access to the destination computer has been denied. Verify that you have access to the destination Windows PowerShell session configuration […]”

Connect to the PowerShell Web Access Gateway

We are finally done. Everything is in place for your users to connect. In my case, I will be connecting as user Bob Houle (contoso\Bob.Houle), who is part of the contoso\DevTeam group.

Navigate to the Gateway's main page and provide the requested information, making sure you specify the name of the farm server onto which the PowerShell Web Access was deployed. The most important field is hidden in the Optional connection settings section: the Configuration Name field, in which you need to provide the name of the custom PowerShell Session Configuration we created (in our case, DevTeam).


Once connected, you should be able to run the Get-Command cmdlet to verify that you are only granted access to the cmdlets starting with Get-SP and to the Merge-SPLogFile one.


Enjoy!

 

Introducing Reverse DSC

Ever since becoming a Microsoft PowerShell MVP back in the summer of 2014, I have been heavily involved with various PowerShell Desired State Configuration (DSC) projects. The main initiative I have been involved with is the SharePointDSC module, which is currently led by Brian Farnhill down in Australia. While my contributions to the core of the project have been limited, I have been spending numerous hours working on a new concept I came up with and which is very close to my heart. Reverse DSC is something I introduced back in late 2015 after spending some late-night hours testing out my SharePointDSC scripts. It is the concept of extracting a DSC Configuration Script out of an existing environment in order to better analyze it, replicate it, or onboard it onto PowerShell DSC. Let me be very clear: this concept does not only apply to the SharePoint world; it applies to all software components that can be automated via DSC. I am of the opinion that this concept will be a game changer in the world of automation, and I strongly encourage you to read through this article to better understand the core concepts behind it.

Definitions

To get started, and to make sure we are all on the same page, let us define the following two terms:

  • Desired State: represents how we want a component to be configured. For example, the Desired State of a SharePoint site (SPWeb) could define its title: for a given site to be in its Desired State, its title needs to be "Intranet".
  • Current State: represents how a component is currently configured. In many cases the Current State can be the same as the Desired State, which is completely fine. PowerShell DSC aims at making sure that whenever the Current State is not equal to the Desired State, we do everything in our power to bring the server node back into its Desired State.

Anatomy of a DSC Resource

Before we go any further, it is key to understand how DSC Resources work internally. Just as a refresher, a DSC Resource is responsible for configuring a specific component within a DSC module. For example, within the SharePointDSC module, the MSFT_SPWebApplication resource is responsible for configuring SharePoint Web Applications. Every DSC Resource is made up of three core functions: Get-TargetResource, Test-TargetResource, and Set-TargetResource.

  • Set-TargetResource is the function responsible for bringing the server in its Desired State by configuring the given component represented by the resource. It is called on the initial configuration call (e.g. Start-DSCConfiguration for Push mode), and when the Local Configuration Manager (LCM) is in the ApplyAndAutocorrect mode and detects that the machine drifted away from its Desired State.
  • Get-TargetResource is the function responsible for analyzing what the current state is for the component represented by the DSC Resource.
  • Test-TargetResource is responsible for calling the Get-TargetResource function to obtain the current state, and compares it with the Desired State contained within the Local Configuration Manager. If it detects that the current state doesn’t match the Desired State, and the LCM is in ApplyAndAutocorrect mode, it will call the Set-TargetResource method to ensure the machine is brought back in its Desired State.

To recap the process when the Local Configuration Manager is configured in ApplyAndAutocorrect mode: the LCM checks on a regular basis (defined by the Configuration Mode Frequency) to see if the server is still in its Desired State. To do so, it calls into the Test-TargetResource function. This function is aware of what the Desired State should be because it is stored in the LCM's memory (use the Get-DSCConfiguration cmdlet to see what is in the LCM's memory), but it needs to call into the Get-TargetResource function to figure out what the Current State is. Once that is done, the Test-TargetResource function has information about both the Desired and Current States and will compare them. If they are the same, we are done and we will check again later. If they differ, then we need to call into the Set-TargetResource function to try to bring the Current State back to being the same as the Desired State.
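If it helps to see that loop in code form, here is a purely illustrative sketch of the test-then-set pattern. The three function bodies below are stand-ins that use a text file as the "component"; they are not real DSC resource code:

# A text file stands in for the component being managed
$statePath = Join-Path $env:TEMP "title.txt"
if (-not (Test-Path $statePath)) { Set-Content -Path $statePath -Value "Team Site" }

# Stand-in implementations of the three functions described above
function Get-TargetResource { @{ Title = (Get-Content $statePath) } }
function Test-TargetResource { param($Title) (Get-TargetResource).Title -eq $Title }
function Set-TargetResource { param($Title) Set-Content -Path $statePath -Value $Title }

# What ApplyAndAutocorrect boils down to on every consistency check:
if (-not (Test-TargetResource -Title "Intranet")) {
    Set-TargetResource -Title "Intranet"
}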

The Reverse DSC Concept

The magic of the Reverse DSC concept lies within the Get-TargetResource function. As explained in the section above, this function is responsible for obtaining information about the current state of the server node for a given component. So you may ask: if, for example, I wanted to get information about all the Web Applications within my SharePoint environment, is the theory that all I have to do is call into the Get-TargetResource function of the MSFT_SPWebApplication DSC Resource? Well, that is absolutely correct, and this is what Reverse DSC is all about. A Reverse DSC script is a dynamic PowerShell script that calls into the Get-TargetResource function of each DSC Resource contained within a DSC module. In the case of SharePoint, the Reverse DSC script calls into the Get-TargetResource function of every DSC Resource in the module (as of SharePointDSC v1.4).

The Reverse DSC script is then responsible for compiling the Current State of each DSC Resource into a complete DSC Configuration Script that represents the Current State of every component within our environment. If that ain't cool, I don't know what is!
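To make the idea concrete, here is a hedged sketch of what such a script does for a single resource. The module path and the parameter set passed to Get-TargetResource are illustrative assumptions, not the actual Reverse DSC script:

# Load a DSC resource module directly and ask it for the Current State
Import-Module "C:\Program Files\WindowsPowerShell\Modules\SharePointDSC\DSCResources\MSFT_SPWebApplication\MSFT_SPWebApplication.psm1"
$current = Get-TargetResource -Name "Intranet" -ApplicationPool "IntranetPool" -ApplicationPoolAccount "contoso\sp_farm" -Url "http://intranet.contoso.com"

# Serialize the returned hashtable as a DSC-style configuration block
$block = "SPWebApplication Intranet`r`n{`r`n"
foreach ($key in $current.Keys) {
    $block += "    $key = `"$($current[$key])`"`r`n"
}
$block += "}"
$block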

Real-Life Usage

I am a Microsoft Premier Field Engineer, which means that most of my days are spent troubleshooting issues with my clients' environments. When I came up with the idea of Reverse DSC, my main intent was to ask my clients to run the Reverse DSC script against their environment and send me back the resulting DSC Configuration Script, so that I could replicate their exact environment within my own lab and more easily troubleshoot their issues with my own set of tools. However, as is often the case with innovations, it turns out that the main use for it may be something totally different from what I originally anticipated. Here are some of the awesome real-life applications for Reverse DSC we can come up with:

  • Dev/Test: As mentioned above, one of the main uses of Reverse DSC is to allow an as-is replica of an existing on-premises environment. Most organizations I work with don't have good DEV and Quality Assurance environments that match their Production environment. Running the Reverse DSC script against the Production environment allows users to take the resulting scripts and create exact copies of that environment for DEV and Test purposes.
  • Azure Automation: Organizations that have an on-premises Production environment and that are looking at moving to the cloud (even if just for DEV/Test) can use the Reverse DSC script to generate the DSC Configuration matching their on-premises environment, and publish it to Azure Automation to have Azure Virtual Machines created that are an exact match of the on-premises environment.
  • Compare environments: How often have we heard the sentence: “It works on my machine!”. With Reverse DSC, we can now run the script against two environments and compare the resulting scripts to see what configuration settings differ between the two.
  • Documentation: While I don’t foresee this as being the most popular reason why organizations would be adopting Reverse DSC, it would still allow them to document (in DSC format) the configuration of an environment at any given point in time.
  • DSC On-boarding: This one is probably one of the key applications for DSC adoption within an organization. Most companies today aren't using DSC to automate the configuration of their environment and to ensure they don't drift away from the specified Desired State. Simply running the Reverse DSC script against an existing environment and then using the resulting script as that environment's own Desired State Configuration script ensures the environment is now maintained by the DSC process. It is almost as if by running through this process you would tell the server: "Tell me what your Current State is. Oh, and by the way, that Current State you just told me about has just become your Desired State." By doing this, organizations can then specify how the LCM should handle configuration drift (ApplyAndMonitor or ApplyAndAutocorrect) and detect when the Current State (which is now also the Desired State) is drifting.

See it in Action

The Reverse DSC script for SharePoint is already a real thing. However, it is still awaiting final approval to officially become part of the SharePointDSC module. The following video shows the execution of the Reverse DSC script against my SharePoint 2016 dev server.

Next blog post in this series -> SharePoint Reverse DSC

SharePoint 2016 Feature Packs

Today at the Ignite conference in Atlanta, Microsoft shared more information about the vision for SharePoint. With SharePoint 2016, it is now possible for organizations to obtain and enable new features within their on-premises environments through the use of “Feature Packs”. In the past, we pretty much had to wait for Service Packs to be released before seeing new features make their way into the product. With Feature Packs, organizations can now activate new features directly into the on-premises product.

The first Feature Pack, scheduled to be made generally available in November of 2016, will introduce the following new features:

For IT Pros

  • Administrative logging: Allowing users to audit actions made in Central Administration;
  • MinRole Changes: Addition of new workloads to support small environments;
  • Unified Logging: Ability to combine logging from both on-premises and Office 365 environments;

For Developers

  • OneDrive API Update: One Drive API 2.0 now available on-premises (allows for interaction with Drives and Items);

For Users

  • App Launcher Custom Tiles: Ability to add custom tiles to the App Launcher (the waffle icon on the left);
  • New OneDrive for Business UX: New User Experience in OneDrive for Business, matching the one introduced in Office 365 last year;
  • Hybrid Taxonomy: Allowing term stores to be unified between on-premises environments and Office 365;


Upgrade SharePoint 2010 Host Header Web Application to SharePoint 2013 Host-Named Site Collections

A customer of mine is upgrading their SharePoint 2010 farm to SharePoint 2013. As part of the upgrade process, they also wish to convert their existing Host Header Web Applications to Host-Named Site Collections. The client has 2 to 3 content databases per Web Application in their SharePoint 2010 environment. It is imperative that the URLs used to access the content do not change. The client also wants to keep the SharePoint 2010 look for the migrated sites, at least for a month after migration. The Host Header Web Application to Host-Named Site Collection move is therefore purely for administrative purposes.

Also, the client is not using the root of their Host Header Web Application: for example, there is no content if users browse to http://intranet.contoso.com. Content only exists in site collections under managed paths, such as http://intranet.contoso.com/sites/TeamA. The present article covers the process required to accomplish this migration.

Background Information

The current SharePoint 2010 farm hierarchy is as follows:

Web Application: http://intranet.contoso.com
Content Databases:

  • Intranet-Content-1
      Site Collections:

    • http://intranet.contoso.com/sites/TeamA
    • http://intranet.contoso.com/sites/TeamB
  • Intranet-Content-2
      Site Collections:

    • http://intranet.contoso.com/sites/TeamC

In summary, the Host Header Web Application is located at http://intranet.contoso.com. This Web Application is served by two content databases: Intranet-Content-1, which contains two site collections, and Intranet-Content-2, which only contains one site collection.

***This article assumes you have a plain vanilla SharePoint 2013 server set up and ready to receive the 2010 content.

Step 1 – Create a Placeholder Web Application in SharePoint 2013

In order to bring our SharePoint 2010 Web Applications over to SharePoint 2013 and convert them to Host-Named Site Collections, we first need to create a new Web Application without a Host Header that will act as a container for these Host-Named Site Collections. This Web Application will not be serving any web requests, properly speaking, meaning that its root will never be accessed by our clients via the browser. We will also be creating a root site collection in this Web Application. This root site collection will never be used by users; it is simply there to ensure that requests to the server are properly processed. This "no host header" Web Application is also required for you to be able to properly run SharePoint add-ins (another topic for another day).

The new Web Application we will be creating will run on port 80 and won't be configured with a Host Header. Even if you already have another Web Application running on port 80 in your SharePoint 2013 environment, this root Web Application still has to be created without a host header and on port 80. I will be giving our new Web Application the name "Host Name Site Collections Container".

To create our new Web Application, I will be using the following PowerShell line of code:

New-SPWebApplication -Name "Host Name Site Collections Container" -Port 80 -ApplicationPool "HNSC" -ApplicationPoolAccount (Get-SPManagedAccount "contoso\sp_farm")


***Notice that our Web Application is created using Classic authentication mode, which is deprecated in SharePoint 2013. Do not worry: as part of our complete upgrade process, once all the Host-Named Site Collections have been properly created, we will convert our Web Application to Claims-Based Authentication. The conversion process will be covered in an upcoming blog post.

Step 2 – Migrate the SharePoint 2010 Content Databases to SharePoint 2013

The next step is to bring the SharePoint 2010 content databases over to your SharePoint 2013 server. In order to do this, we will be copying both the .MDF and .LDF files of our two content databases (Intranet-Content-1 and Intranet-Content-2) over to the 2013 server. You could choose to copy a backup of the files, but in my case, I want to ensure no one can access the content from the SharePoint 2010 server while I'm in the process of doing the migration, so I will simply be detaching the databases from the SharePoint 2010 SQL Server and closing all existing connections to them.

a) Detach the SharePoint 2010 Content Databases
Open SQL Server Management Studio and navigate to your Content Databases. Right click on the Intranet-Content-1 database and select Tasks > Detach.
When the dialog box appears, make sure you check the Drop Connections box, then click OK.
Repeat the process for all other content databases, in my case for Intranet-Content-2.
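If you prefer to script the detach instead of using the GUI, something like the following should do it (a sketch that assumes the SQL Server PowerShell module is available; the server and database names are the ones used in this article):

Import-Module SQLPS -DisableNameChecking
foreach ($db in "Intranet-Content-1", "Intranet-Content-2") {
    # Drop open connections, then detach the database
    Invoke-Sqlcmd -ServerInstance "SP2013-SQL" -Query @"
ALTER DATABASE [$db] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
EXEC sp_detach_db '$db';
"@
}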

b) Copy the .MDF and .LDF Files
Now that our databases have been detached from our live SQL Server, we can move their associated files over to the SharePoint 2013 server. Find the path to your files; in my case they were located under "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA". Grab both the .mdf and .ldf files for each content database.
Copy the files over to the SharePoint 2013 SQL server (in my case to C:\Data).

c) Attach the SharePoint 2010 Content Databases to the SharePoint 2013 SQL Server
Now that the files have been moved over to the SharePoint 2013 SQL Server, we need to attach them to it. Open SQL Server Management Studio on the SharePoint 2013 SQL Server. In the Object Explorer panel, right-click on the Databases folder and select "Attach…".
In the "Attach Database" window that pops up, click on the Add… button and browse to the Intranet-Content-1.mdf file we copied over in the previous step. Select the .mdf file and click OK.
Repeat the same process for all content databases, in my case for Intranet-Content-2. Once completed, you should see the content databases listed in the "Databases to attach:" section of the "Attach Databases" window. Ensure all the proper databases are listed and click "OK".
You should now see the SharePoint 2010 databases listed in the Object Explorer panel.

d) Upgrade the SharePoint 2010 Content Databases to SharePoint 2013
Simply attaching the content databases to SQL Server is not enough for SharePoint to recognize them as content databases. We need to do a mount operation on our content databases in order for SharePoint to upgrade their schema to SharePoint 2013 and associate them with our temporary Web Application created in Step 1 above. To mount a content database onto a SharePoint 2013 farm, we need to use the following line of PowerShell code. Running the command will take a few minutes to complete, and PowerShell will display the upgrade percentage as it upgrades the database schema.

Mount-SPContentDatabase -Name "Intranet-Content-1" -WebApplication "Host Name Site Collections Container"


Once the mounting process has completed, you’ll need to run the above PowerShell line of code for all other Content Databases, in my case for Intranet-Content-2.

Once completed, we end up with an upgraded SharePoint Web Application. Our site collections are available by navigating under our "Container" Web Application. In my environment, site collections can be accessed at the following link: http://sp2013/sites/TeamA

However, these are not Host-Named Site Collections, and more importantly, the URL to access them is not the same as it was in SharePoint 2010, which was one of our requirements for the upgrade.

Step 3 – Create a Root Site for our Host-Named Site Collections

Remember that the client never accesses the root of what used to be its SharePoint 2010 Host Header Web Application (http://intranet.contoso.com). In the 2013 world, however, we need to create yet another "empty container", this time a site collection: a Host-Named Site Collection serving the URL that used to belong to our Web Application. This new site collection will be created directly at http://intranet.contoso.com.

To create this new empty Host Name Site Collection, we will execute the following PowerShell lines of code:

$webApp = Get-SPWebApplication "Host Name Site Collections Container"
New-SPSite -Url "http://intranet.contoso.com" -HostHeaderWebApplication $webApp -OwnerAlias "contoso\sp_farm"


You may be wondering why you need to have a site collection created at the root of http://intranet.contoso.com if I mentioned earlier that the client will never browse to this location. The reason for this empty site collection to exist is to properly serve server resources to site collections located under one of its managed paths (e.g. http://intranet.contoso.com/sites/TeamA). If this site does not exist, you will encounter an error stating that a site has to exist at the root when trying to create "sub-site collections" (under /sites).

Step 4 – Convert the Upgraded Site Collections to Host-Named Site Collections

a) Rename the Site Collection
Back in February 2015, Microsoft released a Cumulative Update for SharePoint 2013 that modifies the behavior of the SPSite.Rename method within the object model. This method can now be used to change the URL of a site collection to a host-header one. In order to leverage this change, your SharePoint 2013 farm needs to be at least on build 15.0.4693.1001. For more information regarding this change, you can read the following Knowledge Base article: https://support.microsoft.com/en-us/kb/2910928 (thanks to my colleague Roger Cormier for the info).

Now that we've made sure we have the proper patch level applied to our farm, we can go ahead and rename our site's URL from http://sp2013/sites/TeamA to http://intranet.contoso.com/sites/TeamA. In order to achieve this, we will use the following lines of PowerShell code:

$site = Get-SPSite "http://sp2013/sites/TeamA"
$site.Rename("http://intranet.contoso.com/sites/TeamA")
IISReset

*** Note that the code above will cause an outage. Ideally, if you have multiple sites to rename at once, you can proceed with all the renames and then simply run IISReset once.
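For example, a batch of renames could look like the following sketch (the URLs are the ones from this article's environment; adjust as needed):

# Rename several site collections in a batch, then reset IIS once
$renames = @{
    "http://sp2013/sites/TeamA" = "http://intranet.contoso.com/sites/TeamA"
    "http://sp2013/sites/TeamB" = "http://intranet.contoso.com/sites/TeamB"
    "http://sp2013/sites/TeamC" = "http://intranet.contoso.com/sites/TeamC"
}
foreach ($oldUrl in $renames.Keys) {
    (Get-SPSite $oldUrl).Rename($renames[$oldUrl])
}
IISReset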

a-2) Backup Path Based Site Collections
Another, less prefered option is to convert the upgraded site collections to Host Name Site Collection by doing a backup our site, and then importing it back in as a Host-Named Site Collection. To backup the sites, I will be using the Backup-SPSite PowerShell cmdlets and will be backing up my data in the c:\Data folder of my server.

Backup-SPSite http://sp2013/sites/TeamA -Path C:\Data\TeamsA.bak


b-2) Restore the Site Collections as a Host Name Site Collection
We are now down to the last step of our migration process: restoring the backed-up site collection as a Host-Named Site Collection. This is achieved by calling the following line of PowerShell code:

Restore-SPSite http://intranet.contoso.com/sites/TeamA -Path "C:\Data\TeamsA.bak" -HostHeaderWebApplication http://sp2013


c) Navigate to your New Host-Named Site Collection
You are now done. Open your browser and navigate to your new Host-Named Site Collection to ensure everything is working as expected.

Create New Site from Custom Web Template in Office 365 (SharePoint Online)

This week I am working for a customer who wants to develop a new solution that will allow users to create new SharePoint Online sites, based on a custom web Template, with a single click. After struggling for a few hours trying to find the proper way of achieving this using the SharePoint add-in model, I came up with a very simple solution that allows a user to automate the creation of SharePoint Online sites based on a custom web Template, using calls to the REST API.

We are all familiar with the default out-of-the-box templates (e.g. STS#0), but custom web templates are a little different. The first thing you need to know when dealing with custom web templates is that every one of them is assigned a custom ID based on the following naming convention: <GUID>#<Template Name>. For example, assume you were to create a new site template and give it a title of "MasterTemplate"; the resulting Name for your Template could end up being something like "{2AA91D04-377B-431A-8D23-7424893F5CEB}#MasterTemplate". The first part of the ID (before the '#') is what we will need to pass to the REST method responsible for creating our new web.

Solution Overview

The solution we will be studying here is made up of two components. The first one will help us retrieve the actual ID of our custom Web Template. The second will be used to actually create the new site, using the retrieved custom Template. All of this will be achieved using a SharePoint-Hosted Add-In and by making REST calls using JavaScript.

For the purpose of this article, I went ahead in SharePoint Online and created a new site, which I’ve modified a bit so that it can be re-used over and over as a Template. I’ve cleaned all web parts from the landing page, and created two custom lists: a task list named “Team Tasks”, and an issue list named “Team Issues to Track”.

I then went ahead and saved this site as a Template. If you don't see this option in your site settings, make sure you have scripting enabled for the given site collection (more info at https://support.office.com/en-us/article/Turn-scripting-capabilities-on-or-off-1f2c515f-5d7e-448a-9fd7-835da935584f). I named my Template "Help Desk Case".
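As a side note, scripting can also be toggled from PowerShell with the SharePoint Online Management Shell (a hedged sketch; the tenant admin and site URLs below are placeholders for your own):

Connect-SPOService -Url "https://contoso-admin.sharepoint.com"
# 0 disables DenyAddAndCustomizePages, which allows custom script (and saving sites as templates)
Set-SPOSite -Identity "https://contoso.sharepoint.com" -DenyAddAndCustomizePages 0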

Now that our custom web template is created and registered in our SharePoint Online site collection, we need to figure out what its ID is. To achieve this, I opened Visual Studio and created a new SharePoint Add-In. We will be using this Add-In to retrieve the IDs of all of our existing site templates (including the custom ones) and display them to our users in a drop-down list. The idea here is to query the following REST endpoint:
/_api/web/getavailablewebtemplates(lcid=1033, doincludecrosslanguage=true)

In the add-in default.aspx page, I have created a new empty DIV element with an ID of "divMain". This empty div will be used to dynamically generate our drop-down list of values containing information about all the available web templates. What our JavaScript code will do is query the host web to retrieve the list of all available web templates, loop through each of them, and add each one to our dynamically generated drop-down list. The option items in our drop-down list will display the Title of each web template, and will have a value representing its internal ID.


The code used in the App.js file for our add-in to retrieve that list is the following:


'use strict';

// hostweburl and appweburl are declared at script scope so that both
// initializePage and the createSite function (defined later) can use them.
var hostweburl;
var appweburl;

ExecuteOrDelayUntilScriptLoaded(initializePage, "sp.js");

function initializePage() {
    // This code runs when the DOM is ready and creates a context object
    // which is needed to use the SharePoint object model.
    $(document).ready(function () {
        hostweburl = decodeURIComponent($.getUrlVar("SPHostUrl"));
        appweburl = decodeURIComponent($.getUrlVar("SPAppWebUrl"));
        var scriptbase = hostweburl + "/_layouts/15/";

        // Load the cross-domain executor scripts, then query the web templates.
        $.getScript(scriptbase + "SP.Runtime.js",
            function () {
                $.getScript(scriptbase + "SP.js",
                    function () { $.getScript(scriptbase + "SP.RequestExecutor.js", getWebTemplates); }
                );
            }
        );
    });

    function getWebTemplates() {
        var requestURL = appweburl + "/_api/SP.AppContextSite(@target)/web/getavailablewebtemplates(lcid=1033, doincludecrosslanguage=true)?@target='" + hostweburl + "'";
        var executor = new SP.RequestExecutor(appweburl);

        executor.executeAsync({
            url: requestURL,
            type: "GET",
            headers: {
                "accept": "application/json;odata=verbose"
            },
            success: function (data) {
                // Build a drop-down list of all available web templates
                var jsonObject = JSON.parse(data.body);
                var results = jsonObject.d.results;
                var s = $('<select id="ddlTemplate" />');
                for (var i = 0; i < results.length; i++) {
                    $('<option />', { value: results[i].Name, text: results[i].Title }).appendTo(s);
                }
                s.appendTo('#divMain');
            },
            error: function (xhr, status, error) {
                alert(JSON.stringify(xhr));
            }
        });
    }
}

jQuery.extend({
    getUrlVars: function () {
        var vars = [], hash;
        var hashes = window.location.href.slice(window.location.href.indexOf('?') + 1).split('&');
        for (var i = 0; i < hashes.length; i++) {
            hash = hashes[i].split('=');
            vars.push(hash[0]);
            vars[hash[0]] = hash[1];
        }
        return vars;
    },
    getUrlVar: function (name) {
        return jQuery.getUrlVars()[name];
    }
});

Now that we managed to retrieve all site templates for our SharePoint Online Site Collection, we need to work on the piece of our Add-in's code that will actually go and create the site based on the web Template we've selected from our drop down list. To achieve this, we will modify the default.aspx page of our Add-in to include a text box allowing the users to enter a title for their new site, and a button to initiate the site's creation. The default.aspx code for my solution looks like the following:


...
<asp:Content ContentPlaceHolderID="PlaceHolderMain" runat="server">

<strong>Title: </strong><input type="text" id="siteTitle" /><br />
<strong>Site Template: </strong>
<div id="divMain">

</div>
<input type="button" id="btnCreate" value ="Create Site" onclick="createSite" />

</asp:Content>
...

Now that the visuals are in place, we need to connect our button to the action that will create the new site. Based on the markup above, we can see that the button calls a JavaScript function named "createSite". One very important thing: when calling the REST API to initiate the creation of the new site, you should only pass the prefix of the associated web template's ID (the part before the '#' sign); in my case, that is the GUID portion of my Help Desk Case template's ID. In order to have the onClick event trigger, we need to add the following logic to our App.js file:


function createSite() {
    var requestURL = appweburl + "/_api/SP.AppContextSite(@target)/web/webinfos/Add?@target='" + hostweburl + "'";
    var siteTitle = $('#siteTitle').val();
    var siteUrl = $('#siteTitle').val().replace(/ /g, ""); // strip all spaces to build a valid URL
    var templateID = $("#ddlTemplate").val().split('#')[0]; // keep only the GUID portion of the template ID
    var jsonData = "{ 'parameters': { '__metadata': { 'type': 'SP.WebInfoCreationInformation' }, 'Title': '" + siteTitle + "', 'Url': '" + siteUrl + "', 'WebTemplate': '" + templateID + "'} }";

    $.ajax({
        url: requestURL,
        type: "POST",
        data: jsonData,
        headers: {
            "accept": "application/json;odata=verbose",
            "content-type": "application/json;odata=verbose",
            "X-RequestDigest": $('#__REQUESTDIGEST').val()
        },
        success: function () { alert("site Created"); },
        error: function (xhr, status, error) {
            alert(JSON.stringify(xhr));
        }
    });
}

Let's now compile and deploy our add-in. The user running the add-in should now be presented with a form allowing them to select both out-of-the-box and custom web templates in SharePoint Online, and create new sites with a single click. Once the site has been successfully created, the user will get a prompt. Of course, there is a lot of validation you should take care of yourself if you ever want to implement such a solution in production (check that the URL doesn't contain invalid characters, etc.).

You can get a copy of the files used in this article Here