Checking for valid Url using TryCreate

I’ve been working with SharePoint Online and needed to validate that a user is passing me a valid URL. After some searching I found that [system.uri] has some useful methods. In particular there is a method called TryCreate, which takes the URL you send it and tells you whether it is a valid URI or not.

So first let’s start with a URL that we know is valid: Bing.com

TryCreate expects three values to be passed to it.

The first is a string (or an existing Uri).

The second is a System.UriKind. This is an enumeration with three possible values: Absolute, Relative, and RelativeOrAbsolute.

The third is a [ref]erence. This means I must declare a variable for the return result to be placed into after the method evaluates what was passed to it.


Here is what that looks like in practice. Note I must use [ref] for my return variable, since it is used as a reference to receive the result.

$url = 'http:\\bing.com'
$kind = 'RelativeOrAbsolute'
$return = $null

[system.uri]::TryCreate($url,$kind,[ref]$return)

Now if I run this I’ll get the following output:

[system.uri]::TryCreate($url,$kind,[ref]$return)
True

Now if I look at my return variable we’ll notice that it has a full object with values.

 PS Z:\> $return


AbsolutePath : /
AbsoluteUri : http://bing.com/
LocalPath : /
Authority : bing.com
HostNameType : Dns
IsDefaultPort : True
IsFile : False
IsLoopback : False
PathAndQuery : /
Segments : {/}
IsUnc : False
Host : bing.com
Port : 80
Query : 
Fragment : 
Scheme : http
OriginalString : http:\\bing.com
DnsSafeHost : bing.com
IdnHost : bing.com
IsAbsoluteUri : True
UserEscaped : False
UserInfo :

Now I can use my return value and test whether the URI’s scheme is HTTP or HTTPS.

if($return -like 'http*')
{
    write-output 'This is a http or https address'
}

This is a http or https address

You could write several other tests to figure out which URI scheme the user passed; I’ll leave that up to your scripting. Here is a link to the rest of the URI schemes.
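If you’d rather not string-match the whole object, a minimal alternative sketch is to test the parsed Uri’s Scheme property (visible in the output above) directly:

# Test the Scheme property instead of string-matching the whole Uri;
# the -in operator requires PowerShell 3.0 or later.
if($return.Scheme -in 'http','https')
{
    write-output 'This is a http or https address'
}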


I hope this helped someone

Until then keep scripting

Thom

Using PowerShell Class to Deploy Zip files

Recently I have been working with @developermj on a class he wrote for deploying code to a server from a zip file. This blog article is about how that code works.

To start this off we need to gain access to the .NET classes that have the features for zipping and unzipping files in them:

System.IO.Compression & System.IO.Compression.FileSystem

These get added with two using statements and Add-Type:

#requires -version 5.0
using namespace System.IO
using namespace System.IO.Compression
param(
 [Parameter(Mandatory=$true)][string]$sourceZip, 
 [Parameter(Mandatory=$true)][string]$destPath
)

add-type -assemblyname 'System.IO.Compression'
add-type -assemblyname 'System.IO.Compression.FileSystem'

Then we’ll build the first part of our utility which is our function to deploy the files. This function is where all the magic is:

function Deploy-Files {
    param(
        [ValidateNotNullOrEmpty()][FileInfo]$sourceZip,
        [ValidateNotNullOrEmpty()][DirectoryInfo]$destFolder
    )
    if (-not $sourceZip.Exists) {
        throw "Zip $($sourceZip.Name) does not exist"
    }
    [ZipArchive]$archive = [ZipFile]::Open($sourceZip, "Read")
    [DeployFile[]]$files = $archive.Entries |
        Where-Object { $_.Length -gt 0 } |
        ForEach-Object { [ArchiveFile]::new($_) }
    if ($files.Length -eq 0) {
        Write-Information "No files to copy"
    }
    $hasWritten = $false
    foreach ($file in $files) {
        [FileInfo]$destFile = "$destFolder$($file.GetName())"
        $copied = $file.TryCopy($destFile)
        if ($copied) { $hasWritten = $true }
    }
    Write-Information "Done"
    if (-not $hasWritten) {
        Write-Information "...Nothing copied"
    }
}

Since the incoming object is of type FileInfo, we can find out whether the file exists with this statement: if (-not $sourceZip.Exists). If the source zip exists we progress on through our function; otherwise we throw an exception.

Since we’ve imported the .NET classes for file compression, we now have the [ZipArchive] type available to cast our $archive variable to. Since ZipArchive requires a stream, we open the zip file with the ZipFile class, which streams it into the ZipArchive object.

Now that we have the entire contents of the archive in the $archive variable, we can apply our class to it. Below is what the value of my $archive looks like.

[DBG]: PS ps:\>> $archive

Entries Mode
------- ----
{Code/, Code/Lib/, Code/Lib/ICSharpCode.SharpZipLib.dll, Code/Mindscape.Samples.Powershell.ZipProvider.csproj...} Read

[DBG]: PS ps:\>> $archive.entries.count
11

The next line in the code is where we’ll start using the Class we’ve defined in our script.

[DeployFile[]]$files = $archive.Entries |
    Where-Object { $_.Length -gt 0 } | ForEach-Object { [ArchiveFile]::new($_) }

Since we are creating a new object of type [DeployFile[]], PowerShell will see this and instantiate new objects from our class. In the example above we take each archive entry and create a new [ArchiveFile] from it. If we follow the code through this loop, every entry whose Length is greater than 0 becomes an [ArchiveFile].

class ArchiveFile : DeployFile {
    hidden [ZipArchiveEntry]$entry

    ArchiveFile([ZipArchiveEntry]$entry) {
        $this.entry = $entry
    }

    [DateTime] GetModifiedDate() {
        return $this.entry.LastWriteTime.UtcDateTime
    }

    [void] Copy([FileInfo]$file) {
        [ZipFileExtensions]::ExtractToFile($this.entry, $file.FullName, $true)
    }

    [string] GetName() {
        return "\$($this.entry.FullName)"
    }
}

As you can see from the declaration, the [ArchiveFile] class inherits from the [DeployFile] class. PowerShell will hit the constructor that matches what was passed to the class; we passed a [ZipArchiveEntry].

Each new object of this type gets all the methods declared in the class. This object type defines the following methods:

GetModifiedDate, Copy, GetName

It then inherits from [DeployFile], and from this inheritance it gets the following methods:

ShouldCopy, Copy, TryCopy, ToString

If we continue to loop through each item in our initial $archive variable, we end up with a new variable, $files, of type [DeployFile[]]. If we pipe an element of the array to Get-Member, we’ll see the [ArchiveFile] class name along with the methods inherited from the [DeployFile] base class.

[DBG]: PS ps:\>> $files[0] | gm

 TypeName: ArchiveFile

Name MemberType Definition 
---- ---------- ---------- 
Copy Method void Copy(System.IO.FileInfo file) 
Equals Method bool Equals(System.Object obj) 
GetHashCode Method int GetHashCode() 
GetModifiedDate Method datetime GetModifiedDate() 
GetName Method string GetName() 
GetType Method type GetType() 
ShouldCopy Method bool ShouldCopy(System.IO.FileInfo file)
ToString Method string ToString() 
TryCopy Method bool TryCopy(System.IO.FileInfo file)

Now that we have our class, we can move on to deploying these files to the intended target, which is what this next block of code does.

 foreach ($file in $files) {
 [FileInfo]$destFile = "$destFolder$($file.GetName())"
 $copied = $file.TryCopy($destFile)
 if ($copied) { $hasWritten = $true }
 }

The foreach loops through each file and builds the destination location plus the name of the file by calling the class’s GetName() method.

 [DBG]: PS ps:\>> $file
\Code/Lib/ICSharpCode.SharpZipLib.dll

[DBG]: PS ps:\>> $file.getname()
\Code/Lib/ICSharpCode.SharpZipLib.dll

Now that we have a [FileInfo] object, we can call the TryCopy method on our $file. TryCopy expects a [FileInfo].

$copied = $file.TryCopy($destFile)

Which takes us into the class’s TryCopy method:

[bool] TryCopy([FileInfo]$file) {
    if ($this.ShouldCopy($file)) {
        [DeployFile]::CreateFolderIfNeeded($file)
        Write-Verbose "Copying to $($file.Name)"
        $this.Copy($file)
        return $true
    }
    return $false
}

The first thing we do is test whether we should copy this file, with the ShouldCopy method on the same object ($this).

[bool] ShouldCopy([FileInfo]$file) {
    if (-not $file.Exists) {
        return $true
    }

    if ($this.GetModifiedDate() -gt $file.LastWriteTimeUtc) {
        return $true
    }

    return $false
}

This method first checks whether the file doesn’t exist yet, with -not $file.Exists; if so, it returns true. It then checks the modified date: if the archive entry’s modified date is greater than the destination file’s LastWriteTimeUtc, it returns true, meaning the file is newer and should be copied (hence the name ShouldCopy). If both tests fail, it returns false, because the file exists and its timestamp is not older than the archive entry’s.

Now we return to TryCopy. Provided ShouldCopy returned true, we next check whether we need to create a directory, through a call to [DeployFile]::CreateFolderIfNeeded([FileInfo]). This static method is part of the DeployFile class and will create the folder for the file in question if it isn’t present.
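CreateFolderIfNeeded itself isn’t shown in this post; a minimal sketch of what it plausibly does (my assumption, reconstructed from its name and usage, shown as a standalone function rather than the author’s actual class member):

# Hypothetical sketch - not the author's actual [DeployFile] code.
# Assumes the helper only ensures the destination's parent directory exists.
function New-ParentFolderIfNeeded([System.IO.FileInfo]$file) {
    if (-not $file.Directory.Exists) {
        $file.Directory.Create()  # DirectoryInfo.Create builds missing parents too
    }
}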

Now that the folder is created, we can call the Copy method on the $file object.

This will copy the file to the destination filename based on the $file object.

Note:

I haven’t been able to get this script to run on its own without writing a wrapper script to call this one. I’ve posted an article about this on PowerShell.org:

https://powershell.org/forums/topic/system-io-compression-in-powershell-class/

Here is what I have in my wrapper script:

#requires -version 5.0
using namespace System.IO
using namespace System.IO.Compression
param(
 [Parameter(Mandatory=$true)][string]$sourceZip, 
 [Parameter(Mandatory=$true)][string]$destPath
)
add-type -assemblyname 'System.IO.Compression'
add-type -assemblyname 'System.IO.Compression.FileSystem'
& .\copy-code.ps1 -sourceZip $sourceZip -destpath $destpath


I hope this helps someone.

Until then keep scripting

Thom

Full copy of the script is in this Gist:

Updating Azure Alert Email

We have a number of emails set up for alerting that need to be changed. Rather than go to each alert and update its properties, I chose to update each available alert in my subscriptions using PowerShell. This post is about how I did that.

I will assume for the purposes of this post that you already are aware of the means to connect to Azure. If you aren’t familiar with that process see the article posted here.

The first thing I needed to figure out was how to get my already-configured alerts. I chose to use the cmdlet Get-AzureRmResource, then filtered the results to find all the alerts in the current subscription context:

$alerts = get-AzureRmResource `
 | Where-Object{$_.resourcetype -like '*alert*'}

Now that I have all the resources that look like an alert, I can iterate through each one and get its properties with Get-AzureRmAlertRule:

foreach($alert in $alerts){
    get-azureRmalertRule -ResourceGroup $alert.ResourceGroupName -Name $alert.Name
}
Properties : Microsoft.Azure.Management.Insights.Models.Rule
Tags : {[$type,
 Microsoft.WindowsAzure.Management.Common.Storage.CasePreservedDictionary,
 Microsoft.WindowsAzure.Management.Common.Storage], [hidden-link:/subscripti
 ons/xxx/resourceGroups/AzureTesting/provid
 ers/Microsoft.Web/serverfarms/EasyAuth, Resource]}
Id : /subscriptions/xxxx/resourceGroups/AzureTes
 ting/providers/microsoft.insights/alertrules/LongHttpQueue
Location : East US
Name : LongHttpQueue 

After some testing of this particular cmdlet, I discovered that the extra switch -DetailedOutput provided the detail I was looking for.

foreach($alert in $alerts){
    get-azureRmalertRule -ResourceGroup $alert.ResourceGroupName -Name $alert.Name -DetailedOutput
}
Properties :
 Name: : LongHttpQueue EasyAuth
 Condition :
 DataSource :
 MetricName : HttpQueueLength
 ResourceId : /subscriptions/xxxxxxxx-xxxxxx-xxxxx-xxxxx-xxxxxxxxxx/re
 sourceGroups/AzureTesting/providers/Microsoft.Web/serverfarms/EasyAuth
 Operator : GreaterThan
 Threshold : 100
 Aggregation operator: Total
 Window size : 00:05:00
 Description : The HTTP queue for the instances of EasyAuth has a
 large number of pending requests.
 Status : Disabled
 Actions :
 SendToServiceOwners : True
 E-mails : 

Tags :
 $type :
 Microsoft.WindowsAzure.Management.Common.Storage.CasePreservedDictionary,
 Microsoft.WindowsAzure.Management.Common.Storage
 hidden-link:/subscriptions/xxxxxxxx-xxxxxx-xxxxx-xxxxx-xxxxxxxxxx/resourceGro
 ups/AzureTesting/providers/Microsoft.Web/serverfarms/EasyAuth:
 Resource
Id : /subscriptions/xxxxxxxx-xxxxxx-xxxxx-xxxxx-xxxxxxxxxx/resourceGroups/AzureTes
 ting/providers/microsoft.insights/alertrules/LongHttpQueue EasyAuth
Location : East US
Name : LongHttpQueue EasyAuth

Now I needed to find the email property on the object I retrieved from Get-AzureRmAlertRule. Inspecting the object a little closer, I found a sub-object called Properties, and under that another object that my emails are associated with. What I discovered through trial and error was that the Actions property is an array of settings. The first item, if set, contains the customEmails and whether or not an email should be sent upon alert activation (shown below).

PS PS:\azure> $t = get-azureRmalertRule -Resourcegroup `
'Azure Testing' -Name 'LongHttpQueue EasyAuth'
PS PS:\azure> $t.properties.Actions[0]

CustomEmails SendToServiceOwners
------------ -------------------
{} True

So this means that if there are no emails set, the array count is zero. The other item that can be in the Actions object is whether a webhook is set. This can be seen by looking at the ServiceUri in the Actions object, as shown below:

PS PS:\azure> $t =(get-azurermalertrule -name 'CPUHigh Dev' `
 -resourcegroup Dev -DetailedOutput)

PS PS:\azure> $t.properties.Actions | fl

Properties : {[$type, Microsoft.WindowsAzure.Management.Common.Storage.CasePreservedDict
 ionary`1[[System.String, mscorlib]],
 Microsoft.WindowsAzure.Management.Common.Storage]}
ServiceUri : https://s1events.azure-automation.net/webhooks?token=xxxx

CustomEmails : {email@email.com, email2@email.com}
SendToServiceOwners : True

On to how to change the email. According to the blog article from Microsoft, you can only delete or add alert rules. I found this to be partially true: if I already have an alert, I can update it by just calling Add-AzureRmMetricAlertRule.

Now, to add email items with Add-AzureRmMetricAlertRule, you can do it two different ways.

The first way is to use the cmdlet Microsoft provides, which creates an object of precisely the thing you want, in the format Add-AzureRmMetricAlertRule expects:

$email = 'youremail@youremailServer.com'
$newEmailObj = new-azurermAlertRuleEmail -CustomEmails $email
add-azurermmetricalertrule -name $Name `
    -Location $Location -ResourceGroup $ResourceGroupName `
    -operator ($alert.operator) -Threshold ($alert.threshold) `
    -TargetResourceId $alert.DataSource.ResourceUri `
    -MetricName $alert.DataSource.MetricName `
    -WindowSize $alert.WindowSize `
    -TimeAggregationOperator $alert.TimeAggregation `
    -Description $targetResourceId.properties.Description `
    -Actions $newEmailObj

The other way, when you already have the alert in an object, is to use the object’s .Add method to add an email to it.

$email = 'youremail@youremailServer.com'
$targetResourceId = (get-azurermalertrule -ResourceGroup $ResourceGroupName `
    -Name $Name -DetailedOutput)
$actions = $targetResourceId.properties.Actions
if ($actions.Count -eq 0)
{
    $targetResourceId.properties.Actions.Add(
        (new-azurermAlertRuleEmail -CustomEmails $email))
    $lastAction = $targetResourceId.properties.Actions.Count - 1
    $targetResourceId.properties.Actions[$lastAction].SendToServiceOwners = $true
    $addedEmail = $true
}
else
{
    $emailActions = $targetResourceId.properties.Actions.Count - 1
    $emails = $actions[$emailActions].CustomEmails
    if ($emails -notcontains $email)
    {
        $targetResourceId.properties.Actions[$emailActions].CustomEmails.Add($email)
        $addedEmail = $true
    }
}

I chose to use the .Add method since I’m doing this over and over again, and it was to my advantage to use that method. Only when there is no action at all ($actions.count -eq 0) do I use New-AzureRmAlertRuleEmail.

I assume if there isn’t at least one item in $actions then it’s safe to add the email.

$emailActions = $targetResourceId.properties.Actions.Count -1
 $emails = $actions[$emailActions].customemails

I use $addedEmail to tell my function whether or not I needed to add the email. This is because the function runs these steps in a foreach loop.

Now that I have a means to get the alert email and update it, doing the converse is a matter of changing the .Add method to a .Remove method, and bingo, I have an add and a delete. To see the entire script in action, see this Gist. P.S. I’m still working on the help; I will update the Gist as it is updated.
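As a quick illustration of that converse, here is a minimal hedged sketch of the removal path (assuming $targetResourceId was fetched with -DetailedOutput as above; the $removedEmail flag is my own naming):

# Hypothetical removal sketch - mirrors the .Add path shown earlier.
$emailActions = $targetResourceId.properties.Actions.Count - 1
$emails = $targetResourceId.properties.Actions[$emailActions].CustomEmails
if ($emails -contains $email)
{
    $targetResourceId.properties.Actions[$emailActions].CustomEmails.Remove($email)
    $removedEmail = $true
}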

I hope this helps someone out.

Until then keep scripting

thom

Deploying CRM with TFS 2015 tasks and PowerShell

Recently I was asked to put together some automation that deploys a CRM solution with Solution Packager. This blog post is about how I did that.

I started with the documentation on Solution Packager. What I found was that I could write a simple script that takes the source location for the solution and packages it into a zip. Here is what that run line looks like:

.\SolutionPackager.exe /action:Pack  `
/folder:'C:\CRM\CRM Solutions\default' /zipfile:c:\temp\myzip2.zip  `
/packagetype:unmanaged
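For reference, the same tool also goes the other direction; here is a hedged example of unpacking with the documented Extract action (reusing the sample paths above):

.\SolutionPackager.exe /action:Extract `
/zipfile:c:\temp\myzip2.zip /folder:'C:\CRM\CRM Solutions\default' `
/packagetype:unmanaged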

So now to put a try/catch around it and some other housekeeping. Since I’m going to call this from TFS, I need to make certain I have a way to allow -Debug and the other standard switches with a PowerShell script, so I’ll include [CmdletBinding()]. The full script is below:

[CmdletBinding()]
param
(
    [String]
    [Parameter(Mandatory)]
    $SourceFolder,
    [String]
    $zipfile = 'Crm'
)
$ErrorActionPreference = 'Stop'
$sourcefolder = (Get-Item $SourceFolder).FullName
"ZipFileName: $sourcefolder\$zipfile.zip"
try
{
    & .\SolutionPackager.exe /action:Pack /folder:$SourceFolder `
        "/zipfile:$sourcefolder\$zipfile.zip" /packagetype:unmanaged
}
catch
{
    Write-Error $_.Exception.Message
    exit 1
}
Get-ChildItem -Path "$sourcefolder\$zipfile.zip" -Verbose

Now that I have the solution zipped up from what the developer checks into source, I needed a means to deploy it. The PowerShell developer in me wanted to write a script of my own, but I found that someone had already written this capability, and all I needed to do was add it to TFS. Here is what I found:

A Developer by the name of Chaminda Chandrasekara created a Plugin to TFS (task) that does a solution import and activation of workflows.

Now with that in mind, I added Chaminda’s code to my release process in TFS, and then added the script I created to my build process in TFS for the full solution. I did need to create a task for my script shown above; this was done by following the steps found here.


My build process steps consist of two steps.

Step 1 is to create the package.

Step 2 is to copy the artifacts to a Staging directory


Now onto the release process which also consists of two steps.

Step 1 is to do the solution import:


In this setup I specify the name of the zip file from the earlier build. I have TFS variables defined in my release steps, identified by:

$(CRMUrl), $(CRMOrg), $(CRMSolution), etc.

To see how these are implemented this site has a good write-up.

Step 2 the last step is to publish the workflow:


That does it, except for all the rest of the setup work you must do to push it through all your environments.

I hope this helps someone


Until then keep scripting

thom

Quick hit – Server and Site IPs

Recently I needed to quickly get server IPs and website addresses, so I put together a couple-line script to do this. This post is about how it works:

Since I need to get these remotely, I’m going to use a PowerShell session. So I’ll first create a variable to hold the servers I need:

$servers = 'server1','server2','server3','server4'

Now that I have a variable with servers in it, I can send it to New-PSSession:

$session = new-pssession -computername $servers

My $session variable contains a session to each server that I want to Invoke-Command on. Now I need a way to get the IPs for every site set up in IIS, plus all the IPs of the server I’m calling. Locally, I can use this script:

import-module webadministration
(Get-Website) | Select-Object name,
    @{Name='bindings'; expression={ ($_.bindings.collection.bindingInformation -replace ':\d+','').trim(':') }}

I’m taking the values from Get-Website and selecting the name, plus a calculated bindings property that takes bindings.collection.bindingInformation and strips the port from the binding string. The return results look like this:

name : MySite
bindings : 10.10.10.39

Now all I need is a way to get my IP addresses. I can do this with Get-NetIPAddress; if I enclose the command in parentheses, I can get at the property:

PS PS:\iis> (get-netipaddress).IPAddress
::1
10.10.10.1
127.0.0.1
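If you only want the IPv4 addresses without the loopback entries, a small hedged refinement (assuming the same Get-NetIPAddress cmdlet from the NetTCPIP module) is:

# -AddressFamily narrows the results to IPv4; then drop the loopback entry.
(get-netipaddress -AddressFamily IPv4 |
    Where-Object { $_.IPAddress -ne '127.0.0.1' }).IPAddress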

Now to put it together in a single script:

$servers = 'server1','server2','server3','server4'
$sess = new-pssession -ComputerName $servers -Credential $admCredentials
invoke-command -Session $sess { import-module webadministration; (Get-Website) |
    Select-Object name, @{Name='bindings';
        expression={ ($_.bindings.collection.bindingInformation -replace ':\d+','').trim(':') }} }
invoke-command -session $sess { $env:computername; (get-netipaddress).IPAddress }

name : website1
bindings : 10.10.10.48
PSComputerName : server1
RunspaceId : 91f9523f-58df-49d5-a0b1-064101822aae

name : websiteapi
bindings : 10.10.10.49
PSComputerName : server1
RunspaceId : 91f9523f-58df-49d5-a0b1-064101822aae

name : website1
bindings : 10.10.10.50
PSComputerName : server2
RunspaceId : 3cd44533-d827-4a60-bdcc-6c91e77d96b9

name : websiteapi
bindings : 10.10.10.51
PSComputerName : server2
RunspaceId : a90c36ca-630e-42a8-abcd-069e8cec5360

server1
server2
::1
::1
::1
10.10.10.48
10.10.10.49
10.10.10.40
10.10.10.51

I hope this helps someone out.


Until then keep Scripting


thom

Create SSRS Data Source

Continuing on with my earlier discussion on SSRS and testing data sources, I’ve come up with a means to create a data source. This article will demonstrate how I did that.

In testing the data source, I had to create a proxy to the SSRS server; we’ll need to do the same thing again here so we can get to the Create method for the data source.

$reportWebService = 'http://yourReportServer/ReportServer/ReportService2010.asmx'
$credentials = Get-Credential
$reportproxy = New-WebServiceProxy -uri $reportWebService -Credential $credentials

The $reportWebService is a link to the web service on my SSRS instance which, when proxied, allows me to get at all the methods and properties of the ReportService2010.asmx class.

The method we’ll be using for this discussion is ReportingService2010.CreateDataSource.

This method requires five parameters:

[string] DataSource,
[string] Parent,
[boolean] Overwrite,
[DataSourceDefinition] Definition,
[Property[]] Properties

DataSource is a string: the name for the data source, including the file name and, in SharePoint mode, the extension (.rsds).

Parent is the fully qualified URL for the parent folder that will contain the data source. In my case I’m going to use /ThomTest, where the location from the root of the SSRS server is the folder named ThomTest.

Overwrite tells the method to overwrite an existing item if it finds one.

Definition is a DataSourceDefinition class that contains the values for the data source. This includes things like:

ConnectString, CredentialRetrieval, Enabled, EnabledSpecified, ImpersonateUser, ImpersonateUserSpecified, Password, Prompt, UserName, WindowsCredentials

For each of the above properties, here is what I’ve been able to discover so far about where they are used:

[Screenshot omitted: a table mapping these DataSourceDefinition properties to the SSRS data source UI]

[Property[]] Properties = an array of ReportService2010.Property objects, name/value pairs that are nearly the same thing as the data source definition. Some of the same data collected to create the data source definition is used in this property array.

The tough part of creating the data source was getting the values passed into the PowerShell function to be accepted by the proxied method. In order to do this I stumbled on a great article on Stack Overflow, which allowed me to get at the classes from the proxied web service via calls similar to the one below:

$ssrsproxy = New-SSRSProxy -reportWebService $reportWebService `
-Credentials $credentials
 $proxyNameSpace = $ssrsproxy.gettype().Namespace

So, to get to the class I need for the DataSourceDefinition, all I need to do is append the class name to the proxy’s namespace:

$proxyNameSpace = $ssrsproxy.gettype().Namespace 
$datasourceDef = New-Object("$proxyNameSpace.DataSourceDefinition")

Now my $datasourceDef is a DataSourceDefinition object which contains the properties I showed above. Since it is now an object, all I need to do to set the items I need is refer to them via dot notation:

$datasourceDef = New-Object("$proxyNameSpace.DataSourceDefinition")
PS PS:\> $datasourceDef


Extension : 
ConnectString : 
UseOriginalConnectString : False
OriginalConnectStringExpressionBased : False
CredentialRetrieval : Prompt
WindowsCredentials : False
ImpersonateUser : False
ImpersonateUserSpecified : False
Prompt : 
UserName : 
Password : 
Enabled : False
EnabledSpecified : False

PS PS:\> $datasourcedef.Connectstring = 'MyConnectionString'

OK, now the fifth parameter is the tough one. This is where I had to get help from @Poshoholic on how to get a hashtable of values into an array of Property objects that the create will accept.

Here is what the Hashtable looks like:

$datasourceDefHash = @{
    'ConnectString' = $connectString; 'UserName' = $username;
    'Password' = $password; 'WindowsCredentials' = $windowsCredentials;
    'Enabled' = $enabled; 'Extension' = $Extension;
    'ImpersonateUser' = $ImpersonateUser; 'ImpersonateUserSpecified' = $true;
    'CredentialRetrieval' = $credentialRetrieval
}

My understanding is that what’s needed is a property collection, so I named my variable accordingly:

$propertyCollection = $datasourceDefHash.Keys.foreach{
    @{ Name = $_; Value = $datasourceDefHash[$_] } -as "${proxyNamespace}.property" }

The magic here is where we iterate through the keys and cast each name/value pair to $proxyNameSpace.property, which is our ReportService2010.Property type. @Poshoholic informed me that because the name of the class is dynamic, we have to use the -as keyword to allow it to be ‘cast’ into the property type we need. I’m glad he helped me, or I’d have been here a very long time.
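To see why -as is needed here: a [bracketed] cast requires the type to be known at parse time, while -as accepts a type name as a string resolved at runtime. A stripped-down illustration, reusing $proxyNameSpace from above:

# -as resolves the type name at runtime, so it works with the
# dynamically generated WebServiceProxy namespace.
$typeName = "$proxyNameSpace.Property"
$prop = @{ Name = 'Enabled'; Value = $true } -as $typeName
$prop.GetType().FullName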

Now to put it all together. I originally wrote this function to allow for continuous deployments and creation of data sources. The only behavior I really wanted was storing the username and password (‘Credentials stored securely in the report server’), with the ‘Use as Windows credentials when connecting to the data source’ checkbox checked, and the username and password supplied when calling the function.

So here is what my param block looks like:

  param
 (
 [Parameter(Mandatory = $false)]
 [string]$DataSourceName,
 [string]$path,
 [Parameter(Mandatory = $false)]
 [uri]$reportWebService,
 [string]$connectString,
 [string]$password,
 [string]$username,
 [ValidateSet('SQL','SQLAZURE','OLEDB','OLEDB-MD','ORACLE','ODBC','XML',`
'SHAREPOINTLIST','SAPBW','ESSBASE','Report Server FileShare','NULL'`
,'WORDOPENXML','WORD','IMAGE','RPL','EXCELOPENXML','EXCEL','MHTML',`
'HTML4.0','RGDI','PDF','ATOM','CSV','NULL','XML')]
 [string]$Extension = 'SQL',
 [boolean]$windowsCredentials = $false,
 [boolean]$enabled = $true,
 [boolean]$ImpersonateUser = $false ,
 [ValidateSet('None', 'Prompt', 'Integrated', 'Store')]
 [string]$credentialRetrieval = 'Store',
 [System.Management.Automation.PSCredential]$credentials
 )

Now that I have my user passing in their credentials and the items I need with the default values I can now call some of the methods and Items I described above:

  #https://msdn.microsoft.com/en-us/library/reportservice2010.reportingservice2010.createdatasource.aspx
 $ssrsproxy = New-SSRSProxy -reportWebService $reportWebService -Credentials $credentials
 $proxyNameSpace = $ssrsproxy.gettype().Namespace
 #https://msdn.microsoft.com/en-us/library/reportservice2010.datasourcedefinition.aspx
 $datasourceDef = New-Object("$proxyNameSpace.DataSourceDefinition") #definition is needed because the create expects and object with some of the properties set.
 #$dataSourceProps = New-Object ("$proxyNameSpace.property")
 #$ssrsExtensions = ($ssrsproxy.ListExtensions('All')).name
 #-join "','" for creating the set statement for extensions.
 #for some reason you have to set the extension and datasource
 #in the definition before attempting to create.
 $datasourceDef.connectstring = $connectString
 $datasourcedef.Extension = $Extension
 if ($credentialRetrieval -eq 'Store')
 {
 $datasourceDef.WindowsCredentials = $WindowsCredentials
 $datasourceDef.password = $password
 $datasourceDef.CredentialRetrieval = $credentialRetrieval
 $datasourceDef.username = $username
 }
 $datasourceDefHash = @{
 'ConnectString' = $connectString; 'UserName' = $username; 'Password' = $password; 'WindowsCredentials' = $windowsCredentials; 'Enabled' = $enabled; 'Extension' = $Extension; 'ImpersonateUser' = $ImpersonateUser; 'ImpersonateUserSpecified' = $true; 'CredentialRetrieval' = $credentialRetrieval
 }
 #convert the hashtable to an array of proxynamespace property items. https://msdn.microsoft.com/en-us/library/reportservice2010.property.aspx
 $propertyCollection = $datasourceDefHash.Keys.foreach`
{ @{ Name = $_; Value = $dataSourceDefHash[$_] } -as "${proxyNamespace}.property" }
 try
 {
 $ssrsproxy.CreateDataSource($DataSourceName, $path, $true, $datasourceDef, $propertyCollection)
 }
 catch
 {
 "Error was $_"
 $line = $_.InvocationInfo.ScriptLineNumber
 "Error was in Line $line"
 }

The actual piece that is doing the creation of the data source is this line

$ssrsproxy.CreateDataSource($DataSourceName, $path, $true, $datasourceDef, `
$propertyCollection)

The script in its full form is below:

<#
 .SYNOPSIS
 Creates an SSRS data source
 
 .DESCRIPTION
 This script creates a datasource from the PowerShell prompt.
 
 .PARAMETER DataSourceName
 A description of the DataSourceName parameter.
 
 .PARAMETER path
 Path to where the datasource will be created. This should be the root of where the source is created.
 /report/report data source will be created at the second report value.
 
 .PARAMETER reportWebService
 URI to the location of the reportingService 2010 asmx page.
 
 .PARAMETER connectString
 This is the connection string that you use to connect to your database.
 
 .PARAMETER password
 Password to use if you are storing the credentials on the SQL server.
 
 .PARAMETER UserName
 Username to use for the connection if you are storing the credentials on the SQL Server.
 
 .PARAMETER Extension
 The Extension parameter is described as the Data Source Type in the new data source window in SSRS. Depending on your installation you may or may not have the items specified in the set statement for this function:
 'SQL' = SQL Server Connection
 'SQLAZURE' = SQL Azure Connection
 'OLEDB' = OLEDB connection 
 other possible connections include: 'OLEDB-MD','ORACLE','ODBC','XML','SHAREPOINTLIST','SAPBW','ESSBASE','Report Server FileShare','NULL','WORDOPENXML','WORD','IMAGE','RPL','EXCELOPENXML','EXCEL','MHTML','HTML4.0','RGDI','PDF','ATOM','CSV','NULL','XML'
 
 .PARAMETER windowsCredentials
 windowsCredentials = When using 'Store' with credential retrieval this sets the data source to 'Use as Windows credentials when connecting to the data source' 
 
 .PARAMETER enabled
 This Tells SSRS to enable the data source.
 
 .PARAMETER ImpersonateUser
 Set this to true if you want to use the 'Impersonate the authenticated user after a connection has been made to the data source' option.
 
 .PARAMETER credentialRetrieval
 CredentialRetrieval = one of four values:
 None = Credentials are not required
 Store = Credentials stored securely in the report server
 requires setting the username and password and optional params are impersonate and windowsCredentials
 Prompt = Credentials supplied by the user running the report
 Integrated = Windows integrated security
 
 .PARAMETER Credentials
 The credentials parameter is required to access the web service. They should be [System.Management.Automation.PSCredential] type
 
 .PARAMETER WebService
 This is the URL to the web service, which allows for creation of the data source.
 
 .EXAMPLE
 PS C:\> $reportWebService = 'http://mySSRSServer//reportserver/reportservice2010.asmx'
 PS C:\> New-SSRSDataSource -DataSourceName 'ThomTest' -path '/ThomTest' -reportWebService $ReportWebService -connectString 'Data Source=servername;Initial Catalog=DB;Integrated Security=True' -username 'domain\user' -password 'password' -Extension SQL -enabled $true -windowsCredentials $true -credentialRetrieval Store -impersonateuser $true -credentials $credentials
 
 .NOTES
 Additional information about the function.
#>
function New-SSRSDataSource
{
 [CmdletBinding()]
 param
 (
 [Parameter(Mandatory = $false)]
 [string]$DataSourceName,
 [string]$path,
 [Parameter(Mandatory = $false)]
 [uri]$reportWebService,
 [string]$connectString,
 [string]$password,
 [string]$username,
 [ValidateSet('SQL','SQLAZURE','OLEDB','OLEDB-MD','ORACLE','ODBC','XML','SHAREPOINTLIST','SAPBW','ESSBASE','Report Server FileShare','NULL','WORDOPENXML','WORD','IMAGE','RPL','EXCELOPENXML','EXCEL','MHTML','HTML4.0','RGDI','PDF','ATOM','CSV','NULL','XML')]
 [string]$Extension = 'SQL',
 [boolean]$windowsCredentials = $false,
 [boolean]$enabled = $true,
 [boolean]$ImpersonateUser = $false ,
 [ValidateSet('None', 'Prompt', 'Integrated', 'Store')]
 [string]$credentialRetrieval = 'Store',
 [System.Management.Automation.PSCredential]$credentials
 )
 #https://msdn.microsoft.com/en-us/library/reportservice2010.reportingservice2010.createdatasource.aspx
 $ssrsproxy = New-SSRSProxy -reportWebService $reportWebService -Credentials $credentials
 $proxyNameSpace = $ssrsproxy.gettype().Namespace
 #https://msdn.microsoft.com/en-us/library/reportservice2010.datasourcedefinition.aspx
 $datasourceDef = New-Object("$proxyNameSpace.DataSourceDefinition") #definition is needed because the create expects an object with some of the properties set.
 #$dataSourceProps = New-Object ("$proxyNameSpace.property")
 #$ssrsExtensions = ($ssrsproxy.ListExtensions('All')).name #-join "','" for creating the set statement for extensions.
 #for some reason you have to set the extension and datasource in the definition before attempting to create. 
 $datasourceDef.connectstring = $connectString
 $datasourcedef.Extension = $Extension
 if ($credentialRetrieval -eq 'Store')
 {
 $datasourceDef.WindowsCredentials = $WindowsCredentials
 $datasourceDef.password = $password
 $datasourceDef.CredentialRetrieval = $credentialRetrieval
 $datasourceDef.username = $username
 }
 $datasourceDefHash = @{
 'ConnectString' = $connectString; 'UserName' = $username; 'Password' = $password; 'WindowsCredentials' = $windowsCredentials; 'Enabled' = $enabled; 'Extension' = $Extension; 'ImpersonateUser' = $ImpersonateUser; 'ImpersonateUserSpecified' = $true; 'CredentialRetrieval' = $credentialRetrieval
 }
 #convert the hashtable to an array of proxynamespace property items. https://msdn.microsoft.com/en-us/library/reportservice2010.property.aspx
 $propertyCollection = $datasourceDefHash.Keys.foreach{ @{ Name = $_; Value = $dataSourceDefHash[$_] } -as "${proxyNamespace}.property" }
 try
 {
 $ssrsproxy.CreateDataSource($DataSourceName, $path, $true, $datasourceDef, $propertyCollection)
 }
 catch
 {
 "Error was $_"
 $line = $_.InvocationInfo.ScriptLineNumber
 "Error was in Line $line"
 }

}

function New-SSRSProxy
{
 param
 (
 [string]$reportWebService,
 [Parameter(Mandatory = $true,
 ValueFromPipeline = $true,
 ValueFromPipelineByPropertyName = $true)]
 [System.Management.Automation.PSCredential]$Credentials
 )
 Begin
 {
 if ($reportWebService -notmatch 'asmx')
 {
 $reportWebService = "$reportWebService/ReportService2010.asmx?WSDL"
 #$reportWebServiceurl = $reportWebServiceUrl.Replace("//","/")
 }
 }
 Process
 {
 #Create Proxy
 Write-Verbose "Creating Proxy, connecting to : $reportWebService"
 $ssrsProxy = New-WebServiceProxy -Uri $reportWebService -UseDefaultCredential -ErrorAction 0
 #Test that we're connected
 $members = $ssrsProxy | get-member -ErrorAction 0
 if (!($members))
 {
 if (!$Credentials)
 {
 $Credentials = Get-Credential -Message 'Enter credentials for the SSRS web service'
 }
 Else
 {
 }
 $ssrsProxy = New-WebServiceProxy -Uri $reportWebService -Credential $Credentials
 }
 $ssrsProxy
 }
 End { }
}

I hope this helps someone

Until then keep scripting

 

Thom

Testing an SSRS DataSource

Recently I’ve been working with SSRS, adding and removing data sources and testing them. This article describes how I got there:

The first thing I did was figure out how I was going to connect to the SSRS website. I found that the best way to do this was with New-WebServiceProxy. This proxy requires a URI and a credential object:

$reportWebService = 'http://yourReportServer/ReportServer/ReportService2010.asmx'
$credentials = Get-Credential
$reportproxy = New-WebServiceProxy -uri $reportWebService -Credential $credentials

The $reportWebService is a link to the web service on my SSRS instance which, when proxied, allows me to get at all the methods and properties of the ReportService2010.asmx class.

Now the next thing I had to figure out is whether the value the user passes for $datasource is actually a data source. After sleuthing around I found a method to determine the type of the passed-in path: GetItemType. If I pass the value of my $datasource to this method, it returns what type of “thing” it is. So, using the report proxy above, I’m going to see what comes back.

$datasource = '/ssrs/mydatasource'
$reportproxy.GetItemType($datasource)   
DataSource

To find the different possible types, run ListItemTypes() against the report proxy:

PS C:\> $reportproxy.listitemtypes() 
Unknown 
Folder 
Report 
Resource 
LinkedReport 
DataSource 
Model 
Site 
DataSet 
Component

As you can see there are several different item types. The one we want to test with is DataSource. Now that we know the user passed us a data source, we can see if we have the ability to view it. This is done with the GetDataSourceContents method.

$validObject = $reportProxy.Getdatasourcecontents($datasource)

Extension : SQL 
ConnectString : Data Source=SomeServer;Initial Catalog=dsn1 
UseOriginalConnectString : False 
OriginalConnectStringExpressionBased : False 
CredentialRetrieval : Store 
WindowsCredentials : True 
ImpersonateUser : False 
ImpersonateUserSpecified : True 
Prompt : Type or enter a user name and password to access the data source: 
UserName : yourDomain\ServiceAccount
Password : 
Enabled : True 
EnabledSpecified : True

So now that we know we can get to the data source, we can test it, by calling the method TestConnectForItemDataSource. The method takes five parameters, and this is where I spent a great deal of time trying to figure out the right params to pass. Two of them (the user name and password) live in the $validObject variable, the data source path gets passed twice, and the last one is a [ref] type that I had not used before. Here is how I was able to do this:

$tempRef = $true # have to initialize a variable so it can be used as a reference
                 # in the next method call
$validConnect = $reportproxy.TestConnectForItemDataSource($datasource, $datasource, `
    ($validObject.username), ($validObject.password), ([ref]$tempRef))

The first argument, $datasource, is our data source. The second is the data source again. This tripped me up because I was thinking I’d have to parse the data source name out of the fully qualified path. I tried passing just the name of the data source and the full path to it, and both seemed to work equally well, so I took the less-work route of datasource, datasource.

The third param is the user name. The fourth is the password, which when you look at the variable in the shell appears $null; nearly every attempt to get at its contents comes back with errors about a null-valued expression. I carried on and assumed that the server has the password, because testing in the GUI works fine.

Now onto the fifth param. I first passed a blank variable here and I would get errors:

Argument: ‘4’ should be a System.Management.Automation.PSReference. Use [ref].

I then ran this same function with [ref]$temp and found that it would error because the variable didn’t exist yet. Thanks to Joel Bennett I was able to figure it out:

At line:1 char:1
+ $reportproxy.TestConnectForItemDataSource($datasource, $datasource, ( …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (tempRef:VariablePath) [], RuntimeException
+ FullyQualifiedErrorId : NonExistingVariableReference
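The general rule: [ref] must wrap a variable that already exists. A standalone illustration of the same mechanics, unrelated to SSRS:

# [ref] must point at an existing variable; wrapping an undeclared one
# raises NonExistingVariableReference, as shown above.
$parsed = 0
[int]::TryParse('42', [ref]$parsed)  # returns True
$parsed                              # now 42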

So I declared the variable first, and then I was able to successfully test the connection:

$validConnect = $reportproxy.TestConnectForItemDataSource($datasource, $datasource, `
    ($validObject.username), ($validObject.password), ([ref]$tempRef))

True

Now that I have a return of true or false, I can add the test result and the data source to my $validObject variable:

 $validObject | Add-Member -type NoteProperty -Name 'Valid' -Value $validConnect
 $validObject | Add-Member -Type NoteProperty -Name 'DataSource' -Value $datasource
 [pscustomobject]$validObject

Now that that’s done, I can show you the entire function with all the items added above, plus another function to do a little housekeeping on the passed-in data source. I have more in this set of functions in a gist in my repo called SQLReporting.ps1: https://gist.github.com/crshnbrn66/


<#
 .SYNOPSIS
 Test to see if the Datasource passed is valid
 
 .DESCRIPTION
 Checks to see if the data source exists. Then will test the data source to see if it is operational
 
 .PARAMETER datasource
 This is the full path to the datasource, where in the examples below datasource1 is our datasource name.
 for example: \myapp\mydatasource\data sources\datasource1
 example 2: \datasource1
 
 .PARAMETER reportWebService
 A description of the reportWebService parameter.
 
 .PARAMETER Credentials
 You must pass a pscredential object
 
 .PARAMETER NoTest
 if this is passed a test will not be performed on the data source.
 
 .NOTES
 Additional information about the function.
#>
function Test-ValidDataSource
{
 [CmdletBinding()]
 param
 (
 [Parameter(Mandatory = $true,
 ValueFromPipeline = $true,
 ValueFromPipelineByPropertyName = $true)]
 [string]$datasource,
 [Parameter(Mandatory = $true,
 ValueFromPipeline = $true,
 ValueFromPipelineByPropertyName = $true,
 HelpMessage = 'Provide the full path to the DataSource')]
 [uri]$reportWebService,
 [Parameter(Mandatory = $true,
 ValueFromPipeline = $true,
 ValueFromPipelineByPropertyName = $true,
 HelpMessage = 'You must pass a pscredential object')]
 [System.Management.Automation.PSCredential]$Credentials,
 [switch]$NoTest
 )
 
 $datasource = Normalize-SSRSFolder $datasource
 try
 {
 $reportProxy = new-webserviceproxy -uri $reportWebService -Credential $credentials
 $reportType = $reportProxy.getitemtype($datasource)
 Write-Debug $reportType
 if ($reportType -eq 'DataSource')
 {
 try
 {
 $validObject = $reportProxy.Getdatasourcecontents($datasource)
 if ($validObject.gettype().name -in ('DataSourceDefinitionOrReference', 'DataSourceDefinition'))
 {
 
 if ($NoTest)
 {
 $validConnect = $false
 }
 else
 {
 $tempRef = $true # have to initialize a variable so it can be used as a reference in the next method call
 $validConnect = $reportproxy.TestConnectForItemDataSource($datasource, $datasource, ($validObject.username), ($validObject.password), ([ref]$tempRef))
 }
 $validObject | Add-Member -type NoteProperty -Name 'Valid' -Value $validConnect
 $validObject | Add-Member -Type NoteProperty -Name 'DataSource' -Value $datasource
 [pscustomobject]$validObject
 }
 else
 {
 $invalid = 'invalid object or permission'
 [pscustomobject]@{
 'Extension' = $invalid
 'ConnectString' = $invalid
 'UseOriginalConnectString' = $false
 'OriginalConnectStringExpressionBased' = $false
 'CredentialRetrieval' = $invalid
 'ImpersonateUserSpecified' = $false
 'WindowsCredentials' = $false
 'Prompt' = $invalid
 'UserName' = $invalid
 'Password' = $invalid
 'Enabled' = $false
 'EnabledSpecified' = $false
 'Valid' = $false
 }
 }
 }
 catch
 {
 "Error was $_"
 $line = $_.InvocationInfo.ScriptLineNumber
 "Error was in Line $line"
 }
 
 
 }
 }
 catch
 {
 "Error was $_"
 $line = $_.InvocationInfo.ScriptLineNumber
 "Error was in Line $line"
 }
}

function Normalize-SSRSFolder
{
 param
 (
 [string]$Folder
 )
 
 if (-not $Folder.StartsWith('/'))
 {
 $Folder = '/' + $Folder
 }
 elseif ($Folder -match '//')
 {
 $Folder = $Folder.replace('//','/')
 }
 
 return $Folder
}


I hope this helps someone

Until then keep scripting


Thom

Capture Web Page / HTML to JPG

I’m a member of an American Legion post, and I’ve been working with them on displaying images on screens for schedules and such. For a while I’ve been using various programs to capture an image from a website and save it to a JPG file. That got me to thinking there has to be a way to do this in script. This article is about how I did just that.

First, I needed an easy way to bring a web page’s HTML into memory for conversion to a JPG. After much searching I found a handy module called NReco. Now that I have a DLL I can import, I can add it to my PowerShell script with Add-Type:

Add-Type -Path ".\nreco\NReco.ImageGenerator.dll"

I chose, for simplicity, to put the DLL in a subfolder where my script resides. Now that the DLL is imported, on to seeing what it can do for me. According to the article, the DLL will convert HTML to a JPG in one line of code. So what I chose to do is take advantage of Invoke-WebRequest and point it at powershell.org to see if it would save a page for me.

$html = invoke-webrequest -uri 'https://powershell.org/forums/'
$h2image = new-object NReco.ImageGenerator.HtmlToImageConverter
$imageFormat = [NReco.ImageGenerator.ImageFormat]::Jpeg
$jpeg = $h2image.generateImage($html, $imageformat)
Add-Type -AssemblyName System.Drawing  # System.Drawing is not loaded by default in powershell.exe
$dataStream = New-Object System.IO.MemoryStream(,$jpeg)
$img = [System.Drawing.Image]::FromStream($dataStream)
$img.save('c:\temp\image.jpg')
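One assumption worth flagging: Invoke-WebRequest returns an HtmlWebResponseObject rather than a plain string, so generateImage is relying on PowerShell’s implicit string conversion. If the rendered image comes out wrong, passing the raw markup explicitly is a safer variant:

# Hedged variant: hand the converter the page's raw markup string.
$jpeg = $h2image.generateImage($html.Content, $imageFormat)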

The $h2image is an object from the DLL we pulled in, which allows us to convert the web page to a JPG. Depending on the size of the page, it may take a little while for this call to return.

$h2image = new-object NReco.ImageGenerator.HtmlToImageConverter

The next line of code sets the image format; this tells the DLL what type of file we want to save. Through IntelliSense in the ISE you’ll notice there are three types in this enumeration.

For what I needed I chose JPG.

Now that I have the file type set and the image generated, I can stream the result into memory:

$dataStream = New-Object System.IO.MemoryStream(,$jpeg)

This one took me a while to figure out; if it hadn’t been for this article I may never have figured it out: http://piers7.blogspot.com/2010/03/3-powershell-array-gotchas.html

The solution for getting the array to be streamed is in this tidbit:

Cup(Of T): 3 PowerShell Array Gotchas

The (somewhat counter-intuitive) solution here is to wrap the array – in an array. This is easily done using the ‘comma’ syntax (a comma before any variable creates a length-1 array containing the variable):

PS > $bytes = 0x1,0x2,0x3,0x4,0x5,0x6
PS > $stream = new-object System.IO.MemoryStream (,$bytes)
PS > $stream.length
6

Now that I have the image bytes in a stream, I can write them to a file using another .NET class, System.Drawing.Image:

$img = [System.Drawing.Image]::FromStream($dataStream)
$img.save('c:\temp\image.jpg')

And voilà, my web page is saved as a JPG.


Full script:

Add-Type -Path ".\nreco\NReco.ImageGenerator.dll"
Add-Type -AssemblyName System.Drawing  # needed for [System.Drawing.Image]
$html = invoke-webrequest -uri 'https://powershell.org/forums/'
$h2image = new-object NReco.ImageGenerator.HtmlToImageConverter
$imageFormat = [NReco.ImageGenerator.ImageFormat]::Jpeg
$jpeg = $h2image.generateImage($html, $imageformat)
$dataStream = New-Object System.IO.MemoryStream(,$jpeg)
$img = [System.Drawing.Image]::FromStream($dataStream)
$img.save('c:\temp\image.jpg')


I hope this helps someone.

Until then keep scripting

Thom


Comparing IIS Sites

Recently I ran into an issue where I needed to compare a site of the same application type on a server in one environment to the same site on a server in another environment. I decided to find a way to detect where the configuration is and then automatically launch a compare tool; I chose WinMerge. So this article is about the scripting I wrote to reach the end goal of comparing one site to another.

I started with a function that gets the local environment variables; most of the function I got from this post: https://powershell.org/friday-fun-expand-environmental-variables-in-powershell-strings/. I expanded on it to allow a computerName to be specified; to support that, I had to write a function to test whether the computer is the local computer or not, so I added a function called Get-LocalRemoteComputer.

   <#
 .SYNOPSIS
 Determines if the name passed is the localhost or not
 
 .DESCRIPTION
 If the name passed is the localhost then the script will send back
 the computername: 
 
 .example
 get-localremotecomputer -computername .
 yourmachinename
 get-localremotecomputer -computername 127.0.0.1
 yourmachinename
 get-localremotecomputer -computername servername
 servername
 
 .PARAMETER computername
 A description of the computername parameter.
 
 .NOTES
 Additional information about the function.
#>
function Get-LocalRemoteComputer
{
 param
 (
 [string]$computername
 )
 
 if ($computername -eq '.' -or ($env:COMPUTERNAME -eq $computername)`
 -or ($computername -eq 'Localhost') -or ($computername -eq '127.0.0.1'))
 {
 $computername = $env:COMPUTERNAME
 $computername
 }
 else
 { $computername }
}

Now to explain what this function does: I simply compare the passed-in computer name to the environment variable $env:COMPUTERNAME, to 127.0.0.1, and to localhost. If it is the computer you are currently running on, the function returns the local computer name; else it returns the computer name that was passed.

Now that I have the computer name to operate on, I can get the remote variables from the machine by issuing Invoke-Command with the computer name, as shown below:

(invoke-command -ComputerName $computername -ScriptBlock `
 { param ([string]$t)get-item env:$t -ErrorAction SilentlyContinue } `
-ArgumentList $text).value

The modified function (allowing computernames)  from this post https://powershell.org/friday-fun-expand-environmental-variables-in-powershell-strings/  is shown below:

function Resolve-EnvVariable
{
 [CmdletBinding()]
 param
 (
 [Parameter(Mandatory = $true,
 ValueFromPipeline = $true,
 Position = 0,
 HelpMessage = 'Enter a string that contains an environmental variable like %WINDIR%')]
 [ValidateNotNullOrEmpty()]
 [string]$String,
 [string]$computername
 )
 #https://powershell.org/friday-fun-expand-environmental-variables-in-powershell-strings/
 Begin
 {
 $computerName = Get-LocalRemoteComputer -computername $computerName
 Write-Verbose "Starting $($myinvocation.mycommand)"
 
 } #Begin
 
 Process
 {
 #if string contains a % then process it
 if ($string -match "%\S+%")
 {
 Write-Verbose "Resolving environmental variables in $String"
 #split string into an array of values
 $values = $string.split("%") | Where { $_ }
 foreach ($text in $values)
 {
 #find the corresponding value in ENV:
 Write-Verbose "Looking for $text"
 if ($env:COMPUTERNAME -ne $computername)
 {
 [string]$replace = (invoke-command -ComputerName $computername -ScriptBlock `
{ param ([string]$t)get-item env:$t -ErrorAction SilentlyContinue }`
 -ArgumentList $text).value
 #(Get-Item env:$text -erroraction "SilentlyContinue").Value
 if ($replace)
 {
 #if found append it to the new string
 Write-Verbose "Found $replace"
 $newstring += $replace
 }
 else
 {
 #otherwise append the original text
 $newstring += $text
 }
 $newstring -match '\w:' | out-null
 if ($Matches)
 {
 $driveLetter = ($Matches.values).trim(':')
 $newstring = $newstring -replace '\w:', "\\$computername\$driveletter$"
 }
 }
 else
 {
 [string]$replace = (Get-Item env:$text -erroraction "SilentlyContinue").Value
 if ($replace)
 {
 #if found append it to the new string
 Write-Verbose "Found $replace"
 $newstring += $replace
 }
 else
 {
 #otherwise append the original text
 $newstring += $text
 }
 }
 } #foreach value
 
 Write-Verbose "Writing revised string to the pipeline"
 #write the string back to the pipeline
 Write-Output $NewString
 } #if
 else
 {
 #skip the string and write it back to the pipeline
 Write-Output $String
 }
 } #Process
 
 End
 {
 Write-Verbose "Ending $($myinvocation.mycommand)"
 } #End
} #end Resolve-EnvVariable
 

When you call this function, the Begin block calls the function described above to get the name of the machine for use in the rest of the function.

 Begin
 {
 $computerName = Get-LocalRemoteComputer -computername $computerName
 Write-Verbose "Starting $($myinvocation.mycommand)"
 
 } #Begin

I needed this function to be able to parse the XML from IIS: the IIS schema stores the locations of the config files in %variable% fashion.
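As a quick usage sketch: the path below is the typical configHistory default found in IIS_schema.xml, and the server name is hypothetical.

# Resolve a %-style schema path against a remote machine; the function
# also rewrites the drive letter to a UNC administrative share.
'%SystemDrive%\inetpub\history' | Resolve-EnvVariable -computername 'server1'
# -> roughly \\server1\C$\inetpub\history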

Now onto the next function, which retrieves the current configuration directory of the installed IIS instance. In the rare case IIS isn’t on C:, it resolves the host’s current install location and uses that drive letter instead.

<#
 .SYNOPSIS
 Gets the current directory for IIS for the machine that is passed.
 
 .DESCRIPTION
 gets the current installed location of iis from the IIS 
 
 .PARAMETER computername
 string for computername that we want to find the current installation of iis
 
 .EXAMPLE
 Get-CurrentIISConfiguration -computername test
 returns: \\test\c$\windows\system32\inetsrv\config
#>
function Get-CurrentIIsConfiguration
{
 [OutputType([string])]
 param
 (
 [string]$computername
 )
 
 $computerName = Get-LocalRemoteComputer -computername $computerName
 if ($computerName -ne $env:COMPUTERNAME)
 {
 $startingdir = "\\$computername\c$\windows"
 }
 else
 {
 $startingdir = $env:windir
 }
 $mostRecentDir = "$startingdir\system32\inetsrv\config"
 $mostRecentDir
}

Now that I have the current installation of IIS, I can find the history location of the most recent IIS backup. I did this so that if I need to compare an application host config from a backup to the most recent version, I can do so. The name of this function is Get-LastIISBackupLocation.

<#
 .SYNOPSIS
 Gets the current backup location of IIS Configs
 
 .DESCRIPTION
 Gets the latest directory that contains the last IIS backup
 If you do not specify an index number then the function will get index 0 of the 
number of backups.  Each backup in IIS is a seperate directory the script 
determines how many there are and sets the first index number to the most 
recent backup.
 
 .PARAMETER computername
 this is as string value that represents the comptutername that we want to 
find the backups for
 
 .PARAMETER index
 This is a integer(16) value that represents the index number of the backup
 you want to retrieve. the most recent backup is index 0. 
 
 .NOTES
 This returns a string object of the backup location for the computername passed.
 .Example
 PS PS:\> Get-LastIISBackupLocation -computername test 1
 returns: \\test\C$\Windows\system32\inetsrv\backup\2016-07-14
 .Example 2
 PS PS:\> Get-LastIISBackupLocation -computername test 0
 \\test\C$\Windows\system32\inetsrv\backup\2016-07-15
#>
function Get-LastIISBackupLocation
{
 [OutputType([string])]
 param
 (
 [string]$computername,
 [int16]$index = '0'
 )
 
 $computerName = Get-LocalRemoteComputer -computername $computerName
 if ($computerName -ne $env:COMPUTERNAME)
 {
 $startingdir = Resolve-EnvVariable -String '%systemroot%' -computername $computername
 }
 else
 {
 $startingdir = $env:windir
 }
 $mostRecentDir = dir ("$startingdir\system32\inetsrv\backup") | Sort-Object -Property Lastwritetime -Descending | select -Index $index
 $mostRecentDir.fullname
}

Now that I have the last backup location, I also wanted to be able to get the last incremental backup location. When an administrator uses IIS, every time a configuration change is made IIS records it and puts a copy in the history location; on most machines this is the c:\inetpub\history\CFGHISTORY_XXXXX directory.

<#
 .SYNOPSIS
 Provides a means to get the current backup location for iis changes
 
 .DESCRIPTION
 Changes made to the IIS instance are recorded in a history config file. This function provides the means to retreive where it is on the machine
 
 Get-IISSystemHistoryLocation -index '2' 
 
 .PARAMETER index
 if a value for index is passed it'll get the x number from the collection of
 history backups. For instance if there are 5 backups and the index number
 passed is 2 then it'll get the next backup from the latest.
 
 .NOTES
 Additional information about the function.
#>
function Get-IISSystemHistoryLocation
{
 [OutputType([string])]
 param
 (
 [string]$computerName,
 [Parameter(Mandatory = $false)]
 [int16]$index = '1'
 )
 $computerName = Get-LocalRemoteComputer -computername $computerName
 if($computerName -ne $env:COMPUTERNAME)
 {
 $startingdir = Resolve-EnvVariable -String '%systemroot%' -computername $computername
 }
 else
 {
 $startingdir = $env:windir
 }
 $config = [xml](get-content `
 "$startingDir\system32\inetsrv\config\schema\IIS_schema.xml")
 $configHistory = (($config.configSchema.sectionschema`
 | where{ $_.name -like "*configHistory*" }).attribute | ?{ $_.name -like "path" }).defaultvalue
 $envVar = $configHistory | Resolve-EnvVariable -computername $computername
 $configDir = dir $envVar | Sort-Object -Property lastwritetime -Descending | select -Index $index
 $configd = $configdir.fullname
 $configd -match '\w:' | out-null
 if($Matches)
 {
 $driveLetter = ($Matches.values).trim(':') 
 $serverConfig = $configd -replace '\w:', "\\$computername\$driveletter$"
 $configd = $serverConfig
 }
 $configd
}

Now that I have the functions for getting the current installation, the current backup, and the history, I can pass this information to WinMerge so I can see the comparison locally on my desktop. To accomplish this I wrote a function specifically for WinMerge that accepts a source and a difference file and opens WinMerge.

<#
 .SYNOPSIS
 takes to files passed and sends them to winmerge for comparison
 
 .DESCRIPTION
 This function will pass the source and differnece file and then launch winmerge.
 
 .PARAMETER sourceFile
 file used to be the source of the comparison
 
 .PARAMETER diffFile
 file that is the difference file
 
 .PARAMETER winMergeLoc
 Physical location of the exe for winmerge. This function will pass the source and differnece file and then launch winmerge.
 
 .NOTES
 Additional information about the function.
#>
function Compare-IISConfigsWinMerge
{
 param
 (
 [string]$sourceFile,
 [string]$diffFile,
 [string]$winMergeLoc
 )
 
 if (test-path $winMergeLoc)
 {
 if (test-path $sourcefile)
 {
 if (test-path $difffile)
 {
 & $winMergeLoc $sourcefile $diffFile
 $results = $true
 }
 else { $results = 'bad diff file' }
 }
 else {$results = 'bad source file'}
 }
 else {$results = 'bad winmerge location' }
 $results
}
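The post doesn’t chain the functions together in one place, so here is a hypothetical end-to-end call; the server names and the WinMerge install path are assumptions:

# Hypothetical glue: compare applicationHost.config between two servers.
$src  = Join-Path (Get-CurrentIIsConfiguration -computername 'server1') 'applicationHost.config'
$diff = Join-Path (Get-CurrentIIsConfiguration -computername 'server2') 'applicationHost.config'
Compare-IISConfigsWinMerge -sourceFile $src -diffFile $diff `
    -winMergeLoc 'C:\Program Files (x86)\WinMerge\WinMergeU.exe'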

The full script can be found on my GitHub account in this gist.

[Screenshot of the WinMerge comparison omitted]

I hope this helps someone.

Until then keep scripting

Thom


Cloning VMs in Hyper-V

Nice article on FoxDeploy about cloning VMs in Hyper-V:

It’s a common enough scenario: build one template machine and then mirror that to make a testlab, so you’d think this would be a built-in feature of Hyper-V, but its not. Luckily, it’s not too hard to do once you know how, and I’ll guide you through the tough parts Overall process We’ll be following […]

via Cloning VMs in Hyper-V — FoxDeploy.com