Parsing CCM\Logs – Part 2 using a Dynamic Parameter

In the first blog post, Parsing CCM\Logs, I showed you how I took a script from the community and made a few tweaks so it could parse the CCM logs. In this blog post I’m going to show you the next step: using that parsing logic in a script that incorporates a dynamic parameter.

To begin with, I didn’t want the user to have to go to the machine, find every log in CCM\logs, and type in the value of the log name. For instance, in my c:\windows\ccm\logs directory there were 178 files with the .log extension, and hand-writing a ValidateSet for that many logs is problematic. So I chose the dynamic parameter approach for this function. Now on to the script.

The first portion of the script is a standard parameter block.


param(
    [Parameter(Mandatory=$true,Position=0)]
    $ComputerName = $env:COMPUTERNAME,

    [Parameter(Mandatory=$true,Position=1)]
    $path = 'c:\windows\ccm\logs'
)

The next portion of the script is where the “magic” happens: the -Log parameter is created dynamically from the first two parameters ($ComputerName, $path).


DynamicParam
{
    $ParameterName = 'Log'
    if($path.ToCharArray() -contains ':')
    {
        $FilePath = "\\$ComputerName\$($path -replace ':','$')"
    }
    else
    {
        $FilePath = "\\$computerName\$((Get-Item $path).FullName -replace ':','$')"
    }

    $logs     = Get-ChildItem "$FilePath\*.log"
    $LogNames = $logs.BaseName

    $logAttribute = New-Object System.Management.Automation.ParameterAttribute
    $logAttribute.Position    = 2
    $logAttribute.Mandatory   = $true
    $logAttribute.HelpMessage = 'Pick A log to parse'

    $logCollection = New-Object System.Collections.ObjectModel.Collection[System.Attribute]
    $logCollection.Add($logAttribute)

    $logValidateSet = New-Object System.Management.Automation.ValidateSetAttribute($LogNames)
    $logCollection.Add($logValidateSet)

    $logParam = New-Object System.Management.Automation.RuntimeDefinedParameter($ParameterName,[string],$logCollection)

    $logDictionary = New-Object System.Management.Automation.RuntimeDefinedParameterDictionary
    $logDictionary.Add($ParameterName,$logParam)
    return $logDictionary
}

To explain what is going on here, I’ll start with the code that got me to this point. Martin Schvartzman wrote a great article, Dynamic ValidateSet in a Dynamic Parameter, that shows how to do most of what I’ve posted here.

I’ll do my best to explain how his code works. The first step is the DynamicParam statement, which tells PowerShell that we are going to create a dynamic parameter. Very simply stated, a dynamic parameter is a parameter that is added at runtime, only when needed.
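Here is a minimal sketch of where that block lives (the function and parameter names are just for illustration). DynamicParam sits between the param() block and the Begin/Process/End blocks of an advanced function, and dynamic parameter values are read out of $PSBoundParameters rather than a named variable:

function Get-Example
{
    [CmdletBinding()]
    param([Parameter(Mandatory=$true)]$Path)

    DynamicParam
    {
        # build a RuntimeDefinedParameterDictionary here and return it;
        # whatever it contains becomes extra parameters at runtime
    }

    Process
    {
        # dynamic parameters don't get their own variables; read them like this
        $log = $PSBoundParameters['Log']
    }
}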

Everything in this dynamic parameter block builds toward the RuntimeDefinedParameterDictionary it returns. To add a runtime parameter we have to define the parameter and its attributes, then add it to that dictionary so the runtime picks it up properly.

System.Management.Automation.RuntimeDefinedParameterDictionary

The next portion of the code creates an object that will contain our parameter attributes:


$logAttribute = New-Object System.Management.Automation.ParameterAttribute

For the purposes of this script we are only going to make the parameter mandatory, set its position, and create a help message. There are other items that can be defined if required. We can see this by getting the members of $logAttribute:


$logAttribute | Get-Member -MemberType Property

   TypeName: System.Management.Automation.ParameterAttribute

Name                            MemberType Definition
----                            ---------- ----------
DontShow                        Property   bool DontShow {get;set;}
HelpMessage                     Property   string HelpMessage {get;set;}
HelpMessageBaseName             Property   string HelpMessageBaseName {get;set;}
HelpMessageResourceId           Property   string HelpMessageResourceId {get;set;}
Mandatory                       Property   bool Mandatory {get;set;}
ParameterSetName                Property   string ParameterSetName {get;set;}
Position                        Property   int Position {get;set;}
TypeId                          Property   System.Object TypeId {get;}
ValueFromPipeline               Property   bool ValueFromPipeline {get;set;}
ValueFromPipelineByPropertyName Property   bool ValueFromPipelineByPropertyName {get;set;}
ValueFromRemainingArguments     Property   bool ValueFromRemainingArguments {get;set;}

Since we are going to need this set of attributes in our parameter, we add it to the attribute collection ($logCollection), which will in turn be added to the runtime parameter $logParam.

Next we’ll create our ValidateSet entry from the list of logs on the remote machine, gathered via the $FilePath variable, wrap it in a ValidateSetAttribute object, and add that to our $logCollection:


$FilePath = "\\$ComputerName\$($path -replace ':','$')"

<br>

$logValidateSet = New-Object System.Management.Automation.ValidateSetAttribute($LogNames)

 $logCollection.add($logValidateSet)

Finally we’ll add our parameter name and $logCollection to a RuntimeDefinedParameter, put that into our RuntimeDefinedParameterDictionary, and hand it back to PowerShell.


$logParam = New-Object System.Management.Automation.RuntimeDefinedParameter($ParameterName,[string],$logCollection)

$logDictionary = New-Object System.Management.Automation.RuntimeDefinedParameterDictionary
$logDictionary.Add($ParameterName,$logParam)
return $logDictionary

Now that we have the full explanation of the dynamic parameter, we can stitch our previous log parser together with this function to give us back any one of the logs on our remote machine. This goes in the Process block of the function:


 $sb2 = "$((Get-ChildItem function:get-cmlog).scriptblock)`r`n"
 $sb1 = [scriptblock]::Create($sb2)
 $results = Invoke-Command -ComputerName $ComputerName -ScriptBlock $sb1 -ArgumentList "$path\$log.log"
 [PSCustomObject]@{"$($log)Log"=$results}

Now when we call Get-CcmLog, we get back a parsed log in an object whose property name has Log appended.
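Assuming the finished function is named Get-CcmLog with the parameters defined above (the computer name here is a placeholder), a call looks like this, and tab-completing -Log cycles only through log names that actually exist on the target machine:

Get-CcmLog -ComputerName Server01 -Path 'c:\windows\ccm\logs' -Log WUAHandler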


Full code is posted in a gist here:


Parsing CCM\Logs

If you’ve ever worked with Configuration Manager, you’ll understand that there are quite a few logs on the client side. Opening and searching through them for actions that have taken place can be quite a task. I needed to find when an item was logged during the initial startup/build of a VM, so I sought out tools to parse these logs and find the status of the Configuration Manager client. This post is about the tools/scripts I found and what I added to them to make it easier to discover and parse all the log files.

I started with the need to simply parse the log files, and discovered that Rich Prescott in the community had already done that work with this script:

http://blog.richprescott.com/2017/07/sccm-log-parser.html

With that script in hand I made two changes. The first change allows all the files in a directory to be added to the return object.

if(($Path -isnot [array]) -and (Test-Path $Path -PathType Container))
{
    $Path = Get-ChildItem "$path\*.log"
}

The second change lets the user specify a tail amount, so that just a portion of the end of the log is retrieved instead of the entire log. That script can be found on one of my gists at the Tail end of this article.

if($tail)
{
    $lines = Get-Content -Path $File -Tail $tail
}
else
{
    $lines = Get-Content -Path $file
}
ForEach($l in $lines)  # the original parsing loop picks up from here
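If you keep the function name from the previous post (Get-CMLog) and the -Tail parameter shown above, usage might look something like this:

# parse just the last 50 lines of each log in the folder
Get-CMLog -Path 'C:\Windows\CCM\Logs' -Tail 50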

 

I hope this helps someone.

Until then

Keep scripting

Thom

TNSNames File Parsing

If you’ve ever worked with Oracle, you are familiar with Oracle’s TNSNAMES file, which describes how to get to a database. ODP.Net doesn’t provide a means to parse the TNSNAMES.ora file and use its entries directly; from everything I’ve read, you must copy from the Description() and set Data Source = Description(), and then you can use that to connect to your Oracle database server. With that in mind I set out to write some scripting to help with this problem.

The first thing I did was follow this great article by the Scripting Guys about how to use ODP.NET. After reading that article I found a great module on the Gallery that implements much of what is spoken about there, and I’ll be using that module in this posting (SimplySQL).

I know where my TNSNAMES.ora file is located, so I’ll bring it into my session with:

$tnsnamesPath = 'c:\tns\tnsnames.ora'
$tn = get-content $tnsnamesPath -raw

I brought the file in with -Raw so that I would get one single string instead of an array of lines. Now with some regex I can get this file into the shape I want. First I look for the common pattern in my TNSNAMES.ora file: somename = (DESCRIPTION =.

$parsedTN = $tn -replace '(.*\=.*|\n.*\=)(.*|\n.*)\(DESCRIPTION*.\=' ,'Data Source = (DESCRIPTION ='

Now that I have the connection name replaced with Data Source =, I can split the text into an array and select my connection from it:

$splitTN = $parsedTN -split '(?=.*Data Source = \(DESCRIPTION \=)' 
$splitTN.count
3

$splitTN[1]
Data Source = (DESCRIPTION =
 (ADDRESS_LIST=
 (ADDRESS = (PROTOCOL = TCP)(HOST = server3)(PORT = 1521))
 (ADDRESS = (PROTOCOL = TCP)(HOST = server58)(PORT = 1521)))
 (LOAD_BALANCE = YES)(CONNECTION_TIMEOUT=5)(RETRY_COUNT=3)
 (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = ketchup)
 (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5)))
 )

Now that I have the connections split into an array, I can select the one I want with Where-Object -like "*myconnectionName*". Then, with the handy Open-OracleConnection cmdlet from that module (SimplySQL), all I have to do is pass in my username and password, and that opens my Oracle connection.

$tnsnames = $splitTN |?{$_ -like "*$connectionName*"}
$connstring = "$tnsnames;User Id=$username;Password=$password"
Open-OracleConnection -ConnectionString $connstring -ConnectionName $connectionName
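With the connection open, querying through SimplySQL is a single call. A quick sketch (the query is just a placeholder; DUAL is Oracle’s built-in one-row table):

# run a trivial query to prove the connection works, then close it
Invoke-SqlQuery -ConnectionName $connectionName -Query 'SELECT sysdate FROM dual'
Close-SqlConnection -ConnectionName $connectionName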

Below is the full script in a GitHub Gist:

I hope this helps someone.

 

Until then

 

Keep scripting

 

Thom

 

[QuickScript] Find out what that MAC is

I wanted to find out what each MAC address on my router was, so I decided to see what information was available for a given IP address. What I found was an API that you can query to get information about which company owns a MAC address.

Now to see how we query and get that information from the API:

According to the site: http://macvendors.co/api/

we only need to query the API with a MAC address, optionally appending json or xml to the URL to choose the return format:


invoke-restmethod -uri http://macvendors.co/api/58:EF:68:00:00:00/json | select result

result
------
@{company=Belkin International Inc.; mac_prefix=58:EF:68; address=12045 East Waterfront Drive,Playa Vista 90094,U...

Without the json suffix:


(invoke-restmethod -uri http://macvendors.co/api/7C:01:91:00:00:00).result

company : Apple, Inc.
mac_prefix : 7C:01:91
address : 1 Infinite Loop,Cupertino CA 95014,US
start_hex : 7C0191000000
end_hex : 7C0191FFFFFF
country : US
type : MA-L

Telling the API to return XML


(invoke-restmethod -uri http://macvendors.co/api/58:EF:68:00:00:00/Xml).result

company : Belkin International Inc.
mac_prefix : 58:EF:68
address : 12045 East Waterfront Drive,Playa Vista 90094,US
start_hex : 58EF68000000
end_hex : 58EF68FFFFFF
country : US
type : MA-L

As you can see, the results come back as JSON (or XML parsed into an object), so getting this with PowerShell is pretty straightforward.
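To circle back to my original goal of identifying every MAC on my router, here is a rough sketch that walks the local ARP cache with Get-NetNeighbor and looks each address up. The filtering and the one-second pause are my own assumptions (the pause is just to be polite to a free API):

# list IPv4 neighbors this machine has seen, skipping empty and broadcast entries
$neighbors = Get-NetNeighbor -AddressFamily IPv4 |
    Where-Object { $_.LinkLayerAddress -and $_.LinkLayerAddress -notmatch '^(00-00-00|FF-FF-FF)' }

foreach($n in $neighbors)
{
    $mac = $n.LinkLayerAddress -replace '-',':'
    $result = (Invoke-RestMethod -Uri "http://macvendors.co/api/$mac/json").result
    [PSCustomObject]@{
        IPAddress = $n.IPAddress
        Mac       = $mac
        Company   = $result.company
    }
    Start-Sleep -Seconds 1
}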

 

Hope this helps Someone

 

Until then

 

Keep Scripting

 

thom

DacPac to Folders with PowerShell

A question was posed on Stack Overflow: how do you create a folder structure from a DacPac or BacPac you’ve built? This article is about how I went about doing that with PowerShell.

The first step was to find someone who had tried this before. The best I could find was this article: Deploy DACPACs with PowerShell. That script does an excellent job of showing how you can use the SQL 2014 DLLs to create a script that can then be deployed to your database, but it did not answer the question.

Thankfully one of the participants in the question was kind enough to show how to do this very thing in C#, so I took their pseudo-code and turned it into PowerShell.


using (TSqlModel modelFromDacpac = new TSqlModel(dacpacPath))
{
    IEnumerable<TSqlObject> allObjects = modelFromDacpac.GetObjects(QueryScopes);
    foreach (TSqlObject tsqlObject in allObjects)
    {
        string script;
        if (tsqlObject.TryGetScript(out script))
        {
            // Some objects such as the DatabaseOptions can't be scripted out.

            // Write to disk by object type
            string objectTypeName = tsqlObject.ObjectType.Name;
            // pseudo-code as I didn't bother writing.
            // basically just create the folder and write a file
            this.MkdirIfNotExists(objectTypeName);
            this.WriteToFile(objectTypeName, tsqlObject.Name + ".sql", script);
        }
    }
}

 

Starting at the top of the script, I needed to translate the using statement into a New-Object call in PowerShell. To do that I had to find which .NET assembly TSqlModel lives in. Based on that research I found I needed to add Microsoft.SqlServer.Dac.Extensions.dll to my session. Once the type was added I was able to get the model from my DacPac.


add-type -path 'C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\Microsoft.SqlServer.Dac.Extensions.dll'

$model =[Microsoft.SqlServer.Dac.Model.TSqlModel]::new(((get-item ".\$dacpac").fullname))

 

Now that I have the model of my DacPac, I need to figure out how to turn this C# line into PowerShell: IEnumerable<TSqlObject> allObjects = model.GetObjects(QueryScopes);

I know my return type is an IEnumerable of TSqlObject; the question is how to query my model to get it. Based on the C# code I need to call GetObjects. GetObjects expects a query scope, and optionally the object identifier ID or the object type. DacQueryScopes is an enumeration with the values All, Builtin, Default, None, SameDatabase, System, and UserDefined. I chose All so I could see everything this method would return.

 

$returnObjects = $model.GetObjects([Microsoft.SqlServer.Dac.Model.DacQueryScopes]::All)

The next step is to iterate through the returned results and test each item to see whether it can be scripted. TryGetScript uses an out parameter, so we must declare a variable before we call the method.


$s = ''
foreach($r in $returnObjects)
{
    if ($r.TryGetScript([ref]$s))
    {
        $objectTypeName = $r.ObjectType.Name
        $d = "c:\temp\db\$objectTypeName"
        if(!(Test-Path $d))
        {
            New-Item $d -ItemType Directory
        }
        $filename = "$d\$($r.Name.Parts).sql"

        if(!(Test-Path $filename))
        {
            New-Item $filename -ItemType File
        }
        $s | Out-File $filename -Force
        Write-Output $filename
    }
}

 

I found when I ran this that it would error when creating some of the files.

dactest.ps1 (28, 10): ERROR: At Line: 28 char: 10
ERROR: + $s | out-file $filename -Force
ERROR: + ~~~~~~~~~~~~~~~~~~~~~~~~~~
ERROR: + CategoryInfo : OpenError: (:) [Out-File], NotSupportedException
ERROR: + FullyQualifiedErrorId : FileOpenFailure,Microsoft.PowerShell.Commands.OutFileCommand
ERROR:

To find the cause of this exception I put a try/catch around the creation of the file:

Try
{
    New-Item $filename -ItemType File
}
Catch
{
    "Filename error $filename"
}

 

 

What I found after putting the try/catch in place was that the exceptions were caused by object names that were URLs:

Filename error c:\temp\db\Service\http://schemas.microsoft.com/SQL/Notifications/EventNotificationService.sql

To fix this I implemented a test to see whether the item name was a URL:

[system.uri]::IsWellFormedUriString('http://schemas.microsoft.com/SQL/Notifications/EventNotificationService.sql', [System.UriKind]::Absolute)

Now that I can tell the name is a URI, I can parse it and take the last segment as the filename:


$url = "$($r.Name.Parts)"
 if ([system.uri]::IsWellFormedUriString($url, [system.urikind]::Absolute))
 {
 $u = ([uri]"$url").Segments[-1]
 $filename = "$d\$u.sql"
 new-item $filename -ItemType File -ErrorAction Stop -Force
 }

Example output

Directory: C:\temp\db

Mode   LastWriteTime     Length Name
----   -------------     ------ ----
d----- 3/5/2018  7:10 PM        Assembly
d----- 3/6/2018  9:04 AM        Contract
d----- 3/5/2018  7:18 PM        DataType
d----- 3/5/2018  7:19 PM        Endpoint
d----- 3/5/2018  7:19 PM        Filegroup
d----- 3/6/2018  9:04 AM        MessageType
d----- 3/5/2018  7:20 PM        Queue
d----- 3/5/2018  7:20 PM        Role
d----- 3/5/2018  7:20 PM        Schema
d----- 3/6/2018  9:04 AM        Service
d----- 3/5/2018  7:20 PM        Table
d----- 3/5/2018  7:20 PM        User
d----- 3/5/2018  7:20 PM        UserDefinedType
-a---- 3/5/2018  7:06 PM      0 [Microsoft.SqlServer.Types].sql

Directory: C:\temp\db\Schema

Mode   LastWriteTime     Length Name
----   -------------     ------ ----
-a---- 3/6/2018  9:11 AM     54 dbo.sql
-a---- 3/6/2018  9:11 AM     76 db_accessadmin.sql
-a---- 3/6/2018  9:11 AM     82 db_backupoperator.sql
-a---- 3/6/2018  9:11 AM     74 db_datareader.sql
-a---- 3/6/2018  9:11 AM     74 db_datawriter.sql
-a---- 3/6/2018  9:11 AM     70 db_ddladmin.sql
-a---- 3/6/2018  9:11 AM     82 db_denydatareader.sql
-a---- 3/6/2018  9:11 AM     82 db_denydatawriter.sql
-a---- 3/6/2018  9:11 AM     64 db_owner.sql
-a---- 3/6/2018  9:11 AM     80 db_securityadmin.sql
-a---- 3/6/2018  9:11 AM     58 guest.sql
-a---- 3/6/2018  9:11 AM     84 INFORMATION_SCHEMA.sql
-a---- 3/6/2018  9:11 AM     54 sys.sql

 

 

The entire script is posted in a gist:

The Power of the Round Table (AZ PowerShell)

Last night we had some technical difficulties with our user group, getting the broadcast and the speaker set up and going, so we had to come up with something good to talk about in the user group.

What we ended up doing is what I like to call a round table, where everyone relayed some of their successes with PowerShell. This article is just a “glimpse” of some of the tidbits I was able to capture during the meeting.

If you are anything like me, you like anything that helps you remember a command you last typed, or lets you bring back a command from history, modify it slightly, and try again. One of the users last night showed us a wonderful trick in PowerShell: #(tab).

The best way to show this feature is to first use the Get-History cmdlet in PowerShell to show the last few items you’ve typed.


PSGit:\> Get-History

Id CommandLine
-- -----------
1 $env:COMPUTERNAME
2 h
3 $env:PSModulePath
4 $env:PSModulePath -split ';'
5 h
6 ($env:PSModulePath -split ';')[0]
7 ($env:PSModulePath -split ';')[1]
8 ($env:PSModulePath -split ';')[2]
9 h
10 get-process
11 h
12 get-service

I only show the history so there is context around what this little gem of a tip does. If I type #PS and then hit Tab, I’ll get each item from history that contains PS. Demonstrated below:

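Roughly, it looks like this at the prompt (what you get back depends on your own history):

PSGit:\> #PS
(press Tab)
PSGit:\> $env:PSModulePath
(press Tab again to cycle to the next history entry containing PS)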

Now for another tip that I found useful as well. Have you ever wanted to create a variable and see its value on the screen at the same time? This can be done with Tee-Object, but there is a much shorter method that one of the users in the AZ PowerShell user group demonstrated:


PSGit:\> ($var = ($env:PSModulePath -split ';')[0])
C:\Users\crshn\Documents\WindowsPowerShell\Modules

PSGit:\> $var
C:\Users\crshn\Documents\WindowsPowerShell\Modules

Simply enclose your assignment in parens and you get the output both in your variable and on your screen.

We had a great discussion about this post: Merging hashtables. This spurred a discussion on a very cool means to copy your object intact to another object. The participant in the user group informed me that he’d share his code with me; when I receive it I’ll add it to this post.

Lastly, another user demonstrated how they use data from the perfmon reliability counters that every Windows machine has. You can view those reliability counters through a simple command at your prompt:

perfmon /rel

It turns out these counters are backed by WMI, and you can drill into them to help diagnose problems in your infrastructure through PowerShell.
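As a quick sketch (the class is Win32_ReliabilityRecords, covered in Richard’s post linked below; note that on server SKUs the reliability data collection may need to be enabled first):

# a few of the reliability records WMI keeps for this machine
Get-CimInstance -ClassName Win32_ReliabilityRecords |
    Select-Object -First 5 TimeGenerated, SourceName, EventIdentifier, Message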

Here is a post from Richard Siddaway on how he used some of the items in the class that this provides:

https://richardspowershellblog.wordpress.com/2015/09/29/win32_reliabilityrecords-class

It was a very fun user group. If you are online or in the Phoenix area, drop by and we’ll entertain you with a speaker or any of the great folks that attend.

 

Until then

Keep Scripting

 

Thom

UCS Director – Creating a Custom Workflow Task for PowerShell

If you are familiar with UCS Director, then you know you can create custom workflow tasks for anything that is JavaScript based. I hadn’t seen a means to do this for a PowerShell script, so this is how I figured it out.

Note: Without the help of my co-worker Don Reilly the task would never have worked. He was able to find the correct methods to call for Director.

The first thing I did was to clone the built-in PowerShell task.


That brings up a dialog that lets you clone from the tasks that are already there.


Now that I have it cloned I can look at the contents of the JavaScript to find out how to call my script. The script I chose to run is one from another community member; it gets the last error from the PowerShell agent. I took his script, saved it to my PSA server in the d:\director\powershell\director folder, and am using this custom workflow task to call it when there is an error.

On to the rest of the setup. Once I cloned the task and put in the code I needed, with the necessary inputs from the Director custom task, I found that the task wouldn’t run. Researching further, I found that when you clone the task, the PowerShell task itself runs differently than any other custom task.

So I then had to go to my resident expert Don Reilly, who helped me discover the right controller to add to my custom workflow task. Here is how my workflow task in UCS Director 6.5 is set up.

The task’s input form has one property whose values I chose to enter directly: in the LOV values for OutputFormat I entered XML and JSON.


The method added for the controller is beforeMarshall, with the following calls to the Cloupia libraries:


importPackage(com.cloupia.feature.powershellAgentController.wftasks);

var agentPairs = PSAgentTabularLOV.getAllPowerShellAgentsLOV();
page.setEmbeddedLOVs(id + ".psAgent", agentPairs);

Here is what my script code looks like in the custom task (the PowerShell is the string assigned to the command variable in the JavaScript below).

For clarity, here is the PowerShell by itself:

$s = "D:\Director\powershell\director\Get-LastUCSDError.ps1"; if(test-path $s){. $s;}else{"Cannot find $s check path on PowerShell Agent Server"}


// Auto generated to code invoke following task
// Task Label:  Execute Native PowerShell Command
// Task Name:  Execute Native PowerShell Command
importPackage(java.lang);
importPackage(java.util);
importPackage(com.cloupia.model.cIM);
importPackage(com.cloupia.service.cIM.inframgr);

function Execute_Native_PowerShell_Command()
{
    var task = ctxt.createInnerTaskContext("Execute Native PowerShell Command");

    // Input 'Label', mandatory=true, mappableTo=
    task.setInput("Label", input.label);

    // Input 'PowerShell Agent', mandatory=true, mappableTo=gen_text_input
    task.setInput("PowerShell Agent", input.psAgent);

    // Input 'Hide Input in PSA, inframgr logs', mandatory=false, mappableTo=gen_text_input
    task.setInput("Hide Input in PSA, inframgr logs", input.isHideInput);

    // Input 'Hide Output in PSA, inframgr logs', mandatory=false, mappableTo=gen_text_input
    task.setInput("Hide Output in PSA, inframgr logs", input.isHideOutput);

    // Input 'Commands/Script', mandatory=true, mappableTo=gen_text_input
    //Changed the command to be hard coded to a script on the PSA server
	var command = '$s = "D:\\Director\\powershell\\director\\Get-LastUCSDError.ps1"; if(test-path $s){. $s;}else{"Cannot find $s check path on PowerShell Agent Server"}';
    task.setInput("Commands/Script", command);

    // Input 'Commands/Rollback Script', mandatory=false, mappableTo=gen_text_input
    task.setInput("Commands/Rollback Script", '');

    // Input 'Output Format', mandatory=false, mappableTo=
    task.setInput("Output Format", input.outputFormat);

    // Input 'Depth', mandatory=true, mappableTo=
    task.setInput("Depth", input.depth);

    // Input 'Maximum Wait Time', mandatory=true, mappableTo=
    task.setInput("Maximum Wait Time", input.maxWaitTimeMinutes);

    // Now execute the task. If the task fails, then it will throw an exception
    task.execute();

    // Now copy the outputs of the inner task for use by subsequent tasks
    // Type of following output: gen_text_input
    output.POWERSHELL_NATIVE_COMMAND_RESULT = task.getOutput("POWERSHELL_NATIVE_COMMAND_RESULT");
}

// Invoke the task
Execute_Native_PowerShell_Command();

Monitor your Connection to Internet – PowerShell

Recently I’ve been having issues with my internet connection, so I decided to write a script to monitor the connection and record how long my connection to my Internet Service Provider drops.

To start this process I had to make sure that I could ping the gateway of the adapter I use to connect to the internet. So the first step was to find my IP address and gateway, which I did with Get-NetIPConfiguration.


Get-NetIPConfiguration -InterfaceAlias 'vEthernet (ExternalSwitch)'

InterfaceAlias : vEthernet (ExternalSwitch)
InterfaceIndex : 11
InterfaceDescription : Hyper-V Virtual Ethernet Adapter
NetProfile.Name : Conn
IPv4Address : 192.168.1.12
IPv6DefaultGateway :
IPv4DefaultGateway : 192.168.1.1
DNSServer : 192.168.1.1

That told me my address and my gateway, so I put them in variables:


$IP = (Get-NetIPConfiguration -InterfaceAlias 'vEthernet (ExternalSwitch)').ipv4address.ipaddress

$gateway = (Get-NetIPConfiguration).ipv4defaultGateway.nexthop

Now that I have them in variables, I can begin pinging both addresses. I chose to wrap this in a function:


function Start-ConnectionMonitoring
{
    param($isp, $gateway, $Logfile, [int]$Delay = 10, [Ipaddress]$adapter, [switch]$ispPopup, [switch]$gateWayPopup)
    $spacer = '--------------------------'
    while($true)
    {
        if(!(Test-Connection $gateway -Source $adapter -Count 1 -ea Ignore))
        {
            Get-Date | Add-Content -Path $Logfile
            "$gateWay Connection Failure" | Add-Content -Path $Logfile
            $outagetime = Start-ContinousPing -address $gateway -adapter $adapter -Delay $Delay
            "Total Outage time in Seconds: $outageTime" | Add-Content -Path $Logfile
            if($gateWayPopup)
            {
                New-PopupMessage -location $gateway -outagetime $outagetime
            }
            $spacer | Add-Content -Path $Logfile
        }
        if((!(Test-Connection $isp -Source $adapter -Count 1 -ea Ignore)) -and (Test-Connection $gateway -Count 1 -ea Ignore))
        {
            Get-Date | Add-Content -Path $Logfile
            "$isp Connection Failure" | Add-Content -Path $Logfile
            $outagetime = Start-ContinousPing -address $isp -adapter $adapter -Delay $Delay
            "Total Outage time in Seconds: $outageTime" | Add-Content -Path $Logfile
            if($ispPopup)
            {
                New-PopupMessage -location $isp -outagetime $outagetime
            }
            $spacer | Add-Content -Path $Logfile
        }
        Start-Sleep -Seconds $Delay
    }
}

In this function I have two nested functions. I’ll explain the first one, Start-ContinousPing. If the connection/ping to either the local router ($gateway) or the ISP ($isp) fails, we call this function, which keeps the ping/connection check in a loop until connectivity comes back. When it does, the function passes back the number of seconds that the resource was unreachable.
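Start-ContinousPing itself is in the gist at the end; a minimal sketch of the idea looks like this (parameter names taken from the calls above, the body is my shorthand):

function Start-ContinousPing
{
    param($address, [ipaddress]$adapter, [int]$Delay = 10)
    $down = Get-Date
    # keep re-checking until the resource answers again
    while(!(Test-Connection $address -Source $adapter -Count 1 -ErrorAction Ignore))
    {
        Start-Sleep -Seconds $Delay
    }
    # hand back the outage length in seconds
    ((Get-Date) - $down).TotalSeconds
}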

The second function, New-PopupMessage, serves as a means to let the user choose whether they get a popup when there is a period of no connectivity. If the -ispPopup switch is set, then when we have no connectivity to the ISP resource we’ll get a popup indicating no connection and how long the connection was out.
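New-PopupMessage is likewise in the gist; one simple way to implement it is with the Wscript.Shell COM object (again just a sketch, with parameter names matching the calls above):

function New-PopupMessage
{
    param($location, $outagetime)
    $shell = New-Object -ComObject Wscript.Shell
    # Popup(text, secondsToWait, title, type): 0 seconds = stay up until dismissed, type 0 = OK button only
    [void]$shell.Popup("Connection to $location was down for $outagetime seconds", 0, 'Connection Monitor', 0)
}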

Finally we’ll look at the contents of the log:

12/27/2017 4:59:29 PM
192.168.1.1 Connection Failure
Total Outage time in Seconds: 0.0380652
--------------------------
12/27/2017 4:59:33 PM
www.cox.com Connection Failure
Total Outage time in Seconds: 0.0353273
--------------------------

As you can see, the connection to my gateway was out for 0.038 seconds, and the connection to my provider, cox.com, was out for 0.035 seconds.

The entire script is located in this gist:

I Hope this helps someone.

 

Until then

Keep Scripting

 

Thom

Merging hashtables

Hash tables in PowerShell are very useful and can be used for a bunch of things. Recently I had to use some code I found on Stack Overflow to merge hash tables. This post is about my experience and the really cool piece of code that iRon posted on Stack Overflow.

First I need to log in to Azure and find my application:


add-azurermaccount

get-azurermresource

I see my resource in the Get-azureRmresource so now I know that I can query for it’s app settings using this command:


$myapp = Get-AzureRmWebAppSlot -resourcegroupname myresourcegroup -name myresourcename -slot production

This produces an object that contains all my web application settings in Azure for the app in question. The item I want to work on is .SiteConfig.AppSettings.

 

This portion of the object holds the properties of the application settings shown in the Azure blade:


PS C:\Users\me> $myApp.siteconfig.AppSettings

Name                         Value
----                         -----
WEBSITE_NODE_DEFAULT_VERSION 6.9.1

Now that I have the current version of what is in my application, I need to see how to put new settings in place without wiping out any existing ones. The cmdlet for the update is Set-AzureRmWebAppSlot. Looking through the help, I can see it has a parameter for the settings I want, -AppSettings, and like most of the other settings it requires a hashtable: [[-AppSettings] <Hashtable>]. But $myApp.SiteConfig.AppSettings is a List:


$appSettings = $Myapp.siteconfig.appsettings
$appsettings -is [pscustomobject]
False
$appsettings.gettype()

IsPublic IsSerial Name   BaseType
-------- -------- ----   --------
True     True     List`1 System.Object

$appsettings -is [hashtable]
False

This means I need to convert my object from a List`1 to a hashtable, so I’ll iterate through it and build one:


$appSettingsHash = @{}
foreach($k in $appSettings) { $appSettingsHash[$k.name] = $k.value }
$appsettingshash

Name                         Value
----                         -----
WEBSITE_NODE_DEFAULT_VERSION 6.9.1

$appsettingshash -is [hashtable]
True

OK, now that I have my current settings in a hashtable, I need to build the entries I want to add as a hashtable too, and then post the merged result.


$appSettings ='{"AppSettings:testkey1": "45test","AppSettings:TestId": "This is a Test Key 28"}'

$newAppSettings = $appSettings | convertfrom-json 

$newAppSettingsHash = @{}
 $newAppSettings.psobject.properties | ForEach-Object { $newAppSettingsHash[$_.Name] = $_.Value }

$newappsettingsHash -is [hashtable]
True

This is where the magic of iRon’s script comes into play. Since I need to use this in a deployment from TFS, I create the new settings in JSON format first and then convert the JSON to a hashtable. Then I call iRon’s function with $newAppSettingsHash and $appSettingsHash, and I have a merged hashtable I can update my application with.


Function Merge-Hashtables([ScriptBlock]$Operator) {
    $Output = @{}
    ForEach ($Hashtable in $Input) {
        If ($Hashtable -is [Hashtable]) {
            ForEach ($Key in $Hashtable.Keys) {
                $Output.$Key = If ($Output.ContainsKey($Key)) { @($Output.$Key) + $Hashtable.$Key } Else { $Hashtable.$Key }
            }
        }
    }
    If ($Operator) {
        ForEach ($Key in @($Output.Keys)) {
            $_ = @($Output.$Key)
            $Output.$Key = Invoke-Command $Operator
        }
    }
    $Output
}

$hashtable = $newAppSettingsHash, $appSettingsHash | Merge-Hashtables {$_[0]} 
$results = Set-AzureRmWebAppSlot -AppSettings $hashtable -name $website -ResourceGroupName $resourceGroup -slot $slot
$r = $results.SiteConfig.AppSettings
Write-Output $r

The really cool thing about the Merge-Hashtables function is that you can merge more than two hashtables, as illustrated below.

 

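For a quick illustration with made-up keys: three hashtables go down the pipeline at once, and the {$_[0]} operator resolves any colliding key to the first value seen:

$a = @{ color = 'red';  size = 'small' }
$b = @{ color = 'blue'; shape = 'round' }
$c = @{ weight = '10kg' }

# color collides, so {$_[0]} keeps 'red'; every other key passes straight through
$a, $b, $c | Merge-Hashtables { $_[0] }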

Full code for this merge-hashtables function run against an Azure application is below:

I hope that when you need to merge hashtables this article makes it a bit easier for you.

 

Until then keep scripting

 

thom

Uploading files to Azure Applications (kudu)

I needed to copy some content to my Azure application that the build and deploy I constructed for it shouldn’t have to redo on every deploy. So my quest began: how do I upload files to an Azure application? The most common and recognized way of uploading files to Azure applications is through Web Deploy, but I didn’t think I needed to package everything up just for this, so I sought out a way to do it with PowerShell. This post is about that pursuit.

Thanks to the article Copy files to Azure Web App with PowerShell and Kudu API, most of the work was already done. All I needed to do was put a loop around my file upload and use Octavie van Haaften’s scripts, sketched just below.
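For reference, here is a rough sketch of what such an upload function does under the hood, pieced together from that article. The publishing-credentials lookup and the /api/vfs/site/wwwroot target are assumptions carried over from it, so treat this as an outline rather than a drop-in replacement for Octavie’s function:

function Upload-FileToWebApp
{
    param($resourceGroupName, $webAppName, $localPath, $kuduPath)

    # pull the web app's publishing credentials for basic auth against the Kudu (scm) site
    $creds = Invoke-AzureRmResourceAction -ResourceGroupName $resourceGroupName `
        -ResourceType Microsoft.Web/sites/config -ResourceName "$webAppName/publishingcredentials" `
        -Action list -ApiVersion 2015-08-01 -Force
    $pair = "$($creds.Properties.PublishingUserName):$($creds.Properties.PublishingPassword)"
    $auth = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))

    $uri     = "https://$webAppName.scm.azurewebsites.net/api/vfs/site/wwwroot/$kuduPath"
    $headers = @{ Authorization = $auth; 'If-Match' = '*' }

    if($localPath)
    {
        # a file: PUT the local content to the target path
        Invoke-RestMethod -Uri $uri -Headers $headers -Method PUT -InFile $localPath
    }
    else
    {
        # no local path: a trailing / on $kuduPath makes Kudu create a directory
        Invoke-RestMethod -Uri $uri -Headers $headers -Method PUT
    }
}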

So I started with Get-ChildItem -Recurse "$downloadFolder\Content". Now that I had my content in a variable called $files, I could put it in a foreach loop and use Octavie van Haaften’s Upload-FileToWebApp.

During the upload of the files I need to determine whether each item from my local disk is a file or a directory. I used the following classes to determine this:

[System.IO.DirectoryInfo] &  [System.IO.FileInfo]

If the item was a directory, I had to make the upload location match the location on disk. I did this with a little bit of replacement logic, using $kudufolder as the variable I pass to Octavie’s upload function.


$kudufolder = ((($file.FullName).Replace($uploadfrom,'Content'))`
.replace('\','/')).trimstart('/')
$kudufolder = "$kudufolder/"
Upload-FileToWebApp -resourceGroupName myresourcegroup`
-webAppName mywebapp -kuduPath $kudufolder

The same holds true for the upload of a file. The only difference between the file and the directory is the trailing /: when you are uploading/creating a directory, a trailing / tells Kudu it’s a directory.


$kudufile = ((($file.FullName).Replace($uploadfrom,'Content'))`
.replace('\','/')).trimstart('/')
Upload-FileToWebApp -resourceGroupName myresourcegroup`
-webAppName mywebapp -localPath $file.FullName -kuduPath $kudufile

Here is the full script, with the foreach loop and each check for a directory or file.


$downloadfolder = 'c:\temp\myAzureStorage'

$uploadfrom = "$downloadfolder\Content"

$files = Get-ChildItem -Recurse "$downloadfolder\Content"

foreach($file in $files)
{
    if($file -is [System.IO.DirectoryInfo])
    {
        $kudufolder = ((($file.FullName).Replace($uploadfrom,'Content')).replace('\','/')).trimstart('/')
        $kudufolder = "$kudufolder/"
        Upload-FileToWebApp -resourceGroupName myresourcegroup -webAppName mywebapp -kuduPath $kudufolder
    }
    elseif($file -is [System.IO.FileInfo])
    {
        $kudufile = ((($file.FullName).Replace($uploadfrom,'Content')).replace('\','/')).trimstart('/')
        Upload-FileToWebApp -resourceGroupName myresourcegroup -webAppName mywebapp -localPath $file.FullName -kuduPath $kudufile
    }
}


I hope this helps someone
Until then keep Scripting
Thom