Advanced Log Parser Part 7 - Creating a Generic Input Format Plug-In

In Part 6 of this series, I showed how to create a very basic COM-based input format provider for Log Parser. I wrote that blog post as a follow-up to an earlier blog post where I had written a more complex COM-based input format provider for Log Parser that worked with FTP RSCA events. My original blog post had resulted in several requests for me to write some easier examples about how to get started writing COM-based input format providers for Log Parser, and those appeals led me to write my last blog post:

Advanced Log Parser Part 6 - Creating a Simple Custom Input Format Plug-In

The example in that blog post simply returns static data, which was the easiest example that I could demonstrate.

For this follow-up blog post, I will illustrate how to create a simple COM-based input format plug-in for Log Parser that you can use as a generic provider for consuming data in text-based log files. Please bear in mind that this is just an example to help developers get started writing their own COM-based input format providers; you might be able to accomplish some of what I will demonstrate in this blog post by using the built-in Log Parser functionality. That being said, this still seems like the best example to help developers get started because consuming data in text-based log files was the most-often-requested example that I received.

In Review: Creating COM-based plug-ins for Log Parser

In my earlier blog posts, I mentioned that a COM plug-in has to support several public methods. You can refer to those posts for the full details, but it is worth repeating the following information here, since it is essential to understanding how the code sample in this blog post is supposed to work.

Method Name     Description
-------------   ---------------------------------------------------------------
OpenInput       Opens your data source and sets up any initial environment settings.
GetFieldCount   Returns the number of fields that your plug-in will provide.
GetFieldName    Returns the name of a specified field.
GetFieldType    Returns the datatype of a specified field.
GetValue        Returns the value of a specified field.
ReadRecord      Reads the next record from your data source.
CloseInput      Closes your data source and cleans up any environment settings.

Once you have created and registered a COM-based input format plug-in, you call it from Log Parser by using something like the following syntax:

logparser.exe "SELECT * FROM FOO" -i:COM -iProgID:BAR

In the preceding example, FOO is a data source that makes sense to your plug-in, and BAR is the COM class name for your plug-in.

Creating a Generic COM plug-in for Log Parser

As I have done in my previous two blog posts about creating COM-based input format plug-ins, I'm going to demonstrate how to create a COM component by using a scriptlet since no compilation is required. This generic plug-in will parse any text-based log files where records are delimited by CRLF sequences and fields/columns are delimited by a separator that is defined as a constant in the code sample.

To create the sample COM plug-in, copy the following code into a text file, and save that file as "Generic.LogParser.Scriptlet.sct" to your computer. (Note: The *.SCT file extension tells Windows that this is a scriptlet file.)

<SCRIPTLET>
  <registration
    Description="Simple Log Parser Scriptlet"
    Progid="Generic.LogParser.Scriptlet"
    Classid="{4e616d65-6f6e-6d65-6973-526f62657274}"
    Version="1.00"
    Remotable="False" />
  <comment>
  EXAMPLE: logparser "SELECT * FROM 'C:\foo\bar.log'" -i:COM -iProgID:Generic.LogParser.Scriptlet
  </comment>
  <implements id="Automation" type="Automation">
    <method name="OpenInput">
      <parameter name="strFileName"/>
    </method>
    <method name="GetFieldCount" />
    <method name="GetFieldName">
      <parameter name="intFieldIndex"/>
    </method>
    <method name="GetFieldType">
      <parameter name="intFieldIndex"/>
    </method>
    <method name="ReadRecord" />
    <method name="GetValue">
      <parameter name="intFieldIndex"/>
    </method>
    <method name="CloseInput">
      <parameter name="blnAbort"/>
    </method>
  </implements>
  <SCRIPT LANGUAGE="VBScript">

Option Explicit

' Define the column separator in the log file.
Const strSeparator = "|"

' Define whether the first row contains column names.
Const blnHeaderRow = True

' Define the field type constants.
Const TYPE_INTEGER   = 1
Const TYPE_REAL      = 2
Const TYPE_STRING    = 3
Const TYPE_TIMESTAMP = 4
Const TYPE_NULL      = 5

' Declare variables.
Dim objFSO, objFile, blnFileOpen
Dim arrFieldNames, arrFieldTypes
Dim arrCurrentRecord

' Indicate that no file has been opened.
blnFileOpen = False

' --------------------------------------------------------------------------------
' Open the input session.
' --------------------------------------------------------------------------------

Public Function OpenInput(strFileName)
    Dim tmpCount
    ' Test for a file name.
    If Len(strFileName)=0 Then
        ' Return a status that the parameter is incorrect.
        OpenInput = 87
        blnFileOpen = False
    Else
        ' Test for single-quotes.
        If Left(strFileName,1)="'" And Right(strFileName,1)="'" Then
            ' Strip the single-quotes from the file name.
            strFileName = Mid(strFileName,2,Len(strFileName)-2)
        End If
        ' Open the file system object.
        Set objFSO = CreateObject("Scripting.Filesystemobject")
        ' Verify that the specified file exists.
        If objFSO.FileExists(strFileName) Then
            ' Open the specified file.
            Set objFile = objFSO.OpenTextFile(strFileName,1,False)
            ' Set a flag to indicate that the specified file is open.
            blnFileOpen = true
            ' Retrieve an initial record.
            Call ReadRecord()
            ' Redimension the array of field names.
            ReDim arrFieldNames(UBound(arrCurrentRecord))
            ' Loop through the record fields.
            For tmpCount = 0 To (UBound(arrFieldNames))
                ' Test for a header row.
                If blnHeaderRow = True Then
                    arrFieldNames(tmpCount) = arrCurrentRecord(tmpCount)
                Else
                    arrFieldNames(tmpCount) = "Field" & (tmpCount+1)
                End If
            Next
            ' Test for a header row.
            If blnHeaderRow = True Then
                ' Retrieve a second record.
                Call ReadRecord()
            End If
            ' Redimension the array of field types.
            ReDim arrFieldTypes(UBound(arrCurrentRecord))
            ' Loop through the record fields.
            For tmpCount = 0 To (UBound(arrFieldTypes))
                ' Test if the current field contains a date.
                If IsDate(arrCurrentRecord(tmpCount)) Then
                    ' Specify the field type as a timestamp.
                    arrFieldTypes(tmpCount) = TYPE_TIMESTAMP
                ' Test if the current field contains a number.
                ElseIf IsNumeric(arrCurrentRecord(tmpCount)) Then
                    ' Test if the current field contains a decimal.
                    If InStr(arrCurrentRecord(tmpCount),".") Then
                        ' Specify the field type as a real number.
                        arrFieldTypes(tmpCount) = TYPE_REAL
                    Else
                        ' Specify the field type as an integer.
                        arrFieldTypes(tmpCount) = TYPE_INTEGER
                    End If
                ' Test if the current field is null.
                ElseIf IsNull(arrCurrentRecord(tmpCount)) Then
                    ' Specify the field type as NULL.
                    arrFieldTypes(tmpCount) = TYPE_NULL
                ' Test if the current field is empty.
                ElseIf IsEmpty(arrCurrentRecord(tmpCount)) Then
                    ' Specify the field type as NULL.
                    arrFieldTypes(tmpCount) = TYPE_NULL
                ' Otherwise, assume it's a string.
                Else
                    ' Specify the field type as a string.
                    arrFieldTypes(tmpCount) = TYPE_STRING
                End If
            Next
            ' Temporarily close the log file.
            objFile.Close
            ' Re-open the specified file.
            Set objFile = objFSO.OpenTextFile(strFileName,1,False)
            ' Test for a header row.
            If blnHeaderRow = True Then
                ' Skip the first row.
                objFile.SkipLine
            End If
            ' Return success status.
            OpenInput = 0
        Else
            ' Return a file not found status.
            OpenInput = 2
        End If
    End If
End Function

' --------------------------------------------------------------------------------
' Close the input session.
' --------------------------------------------------------------------------------

Public Function CloseInput(blnAbort)
    ' Free the objects.
    Set objFile = Nothing
    Set objFSO = Nothing
    ' Set a flag to indicate that the specified file is closed.
    blnFileOpen = False
End Function

' --------------------------------------------------------------------------------
' Return the count of fields.
' --------------------------------------------------------------------------------

Public Function GetFieldCount()
    ' Specify the default value.
    GetFieldCount = 0
    ' Test if a file is open.
    If (blnFileOpen = True) Then
        ' Test for the number of field names.
        If UBound(arrFieldNames) >= 0 Then
            ' Return the count of fields.
            GetFieldCount = UBound(arrFieldNames) + 1
        End If
    End If
End Function

' --------------------------------------------------------------------------------
' Return the specified field's name.
' --------------------------------------------------------------------------------

Public Function GetFieldName(intFieldIndex)
    ' Specify the default value.
    GetFieldName = Null
    ' Test if a file is open.
    If (blnFileOpen = True) Then
        ' Test if the index is valid.
        If intFieldIndex<=UBound(arrFieldNames) Then
            ' Return the specified field name.
            GetFieldName = arrFieldNames(intFieldIndex)
        End If
    End If
End Function

' --------------------------------------------------------------------------------
' Return the specified field's type.
' --------------------------------------------------------------------------------

Public Function GetFieldType(intFieldIndex)
    ' Specify the default value.
    GetFieldType = Null
    ' Test if a file is open.
    If (blnFileOpen = True) Then
        ' Test if the index is valid.
        If intFieldIndex<=UBound(arrFieldTypes) Then
            ' Return the specified field type.
            GetFieldType = arrFieldTypes(intFieldIndex)
        End If
    End If
End Function

' --------------------------------------------------------------------------------
' Return the specified field's value.
' --------------------------------------------------------------------------------

Public Function GetValue(intFieldIndex)
    ' Specify the default value.
    GetValue = Null
    ' Test if a file is open.
    If (blnFileOpen = True) Then
        ' Test if the index is valid.
        If intFieldIndex<=UBound(arrCurrentRecord) Then
            ' Return the specified field value based on the field type.
            Select Case arrFieldTypes(intFieldIndex)
                Case TYPE_INTEGER:
                    GetValue = CInt(arrCurrentRecord(intFieldIndex))
                Case TYPE_REAL:
                    GetValue = CDbl(arrCurrentRecord(intFieldIndex))
                Case TYPE_STRING:
                    GetValue = CStr(arrCurrentRecord(intFieldIndex))
                Case TYPE_TIMESTAMP:
                    GetValue = CDate(arrCurrentRecord(intFieldIndex))
                Case Else
                    GetValue = Null
            End Select
        End If
    End If
End Function
  
' --------------------------------------------------------------------------------
' Read the next record, and return True or False to indicate whether there is more data.
' --------------------------------------------------------------------------------

Public Function ReadRecord()
    ' Specify the default value.
    ReadRecord = False
    ' Test if a file is open.
    If (blnFileOpen = True) Then
        ' Test if there is more data.
        If objFile.AtEndOfStream Then
            ' Flag the log file as having no more data.
            ReadRecord = False
        Else
            ' Read the current record.
            arrCurrentRecord = Split(objFile.ReadLine,strSeparator)
            ' Flag the log file as having more data to process.
            ReadRecord = True
        End If
    End If
End Function

  </SCRIPT>

</SCRIPTLET>

After you have saved the scriptlet code to your computer, you register it by using the following syntax:

regsvr32 Generic.LogParser.Scriptlet.sct
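
Incidentally, if you ever need to remove the scriptlet's registration - for example, after generating a new GUID for a copy of the scriptlet - regsvr32 supports an unregister switch:

regsvr32 /u Generic.LogParser.Scriptlet.sct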

At the very minimum, you can now use the COM plug-in with Log Parser by using syntax like the following:

logparser "SELECT * FROM 'C:\Foo\Bar.log'" -i:COM -iProgID:Generic.LogParser.Scriptlet

Next, let's analyze what this sample does.

Examining the Generic Scriptlet in Detail

Here are the different parts of the scriptlet and what they do:

  • The <registration> section of the scriptlet sets up the COM registration information; you'll notice the COM component class name and GUID, as well as version information and a general description. (Note that you should generate your own GUID for each scriptlet that you create.)
  • The <implements> section declares the public methods that the COM plug-in has to support.
  • The <script> section contains the actual implementation:
    • The first part of the script section declares the global variables that will be used:
      • The strSeparator constant defines the delimiter that is used to separate the data between fields/columns in a text-based log file.
      • The blnHeaderRow constant defines whether the first row in a text-based log file contains the names of the fields/columns:
        • If set to True, the plug-in will use the data in the first line of the log file to name the fields/columns.
        • If set to False, the plug-in will define generic field/column names like "Field1", "Field2", etc.
    • The second part of the script contains the required methods:
      • The OpenInput() method performs several tasks:
        • Locates and opens the log file that you specify in your SQL statement, or returns an error if the log file cannot be found.
        • Determines the number, names, and data types of fields/columns in the log file.
      • The CloseInput() method cleans up the session by closing the log file and destroying objects.
      • The GetFieldCount() method returns the number of fields/columns in the log file.
      • The GetFieldName() method returns the name of a field/column in the log file.
      • The GetFieldType() method returns the data type of a field/column in the log file. As a reminder, Log Parser supports the following five data types for COM plug-ins: TYPE_INTEGER, TYPE_REAL, TYPE_STRING, TYPE_TIMESTAMP, and TYPE_NULL.
      • The GetValue() method returns the data value of a field/column in the log file.
      • The ReadRecord() method moves to the next line in the log file. This method returns True if there is additional data to read, or False when the end of data is reached.

Next, let's look at how to use the sample.

Using the Generic Scriptlet with Log Parser

As a sample log file for this blog, I'm going to use the data in the Sample XML File (books.xml) from MSDN. By running a quick Log Parser query that I will show later, I was able to export data from the XML file into a text file named "books.log" that represents an example of a simple log file format that I have had to work with in the past:

id|publish_date|author|title|price
bk101|2000-10-01|Gambardella, Matthew|XML Developer's Guide|44.950000
bk102|2000-12-16|Ralls, Kim|Midnight Rain|5.950000
bk103|2000-11-17|Corets, Eva|Maeve Ascendant|5.950000
bk104|2001-03-10|Corets, Eva|Oberon's Legacy|5.950000
bk105|2001-09-10|Corets, Eva|The Sundered Grail|5.950000
bk106|2000-09-02|Randall, Cynthia|Lover Birds|4.950000
bk107|2000-11-02|Thurman, Paula|Splish Splash|4.950000
bk108|2000-12-06|Knorr, Stefan|Creepy Crawlies|4.950000
bk109|2000-11-02|Kress, Peter|Paradox Lost|6.950000
bk110|2000-12-09|O'Brien, Tim|Microsoft .NET: The Programming Bible|36.950000
bk111|2000-12-01|O'Brien, Tim|MSXML3: A Comprehensive Guide|36.950000
bk112|2001-04-16|Galos, Mike|Visual Studio 7: A Comprehensive Guide|49.950000

In this example, the data is pretty easy to understand - the first row contains the list of field/column names, and the fields/columns are separated by the pipe ("|") character throughout the log file. That being said, you could easily change my sample code to use whatever delimiter your custom log files use.
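
For example, if you had a hypothetical comma-delimited log file with no header row, the only changes that you should need to make are to the two constants at the top of the scriptlet:

' Hypothetical settings for a comma-delimited log file with no
' header row; the plug-in will name the columns "Field1", "Field2", etc.
Const strSeparator = ","
Const blnHeaderRow = False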

With that in mind, let's look at some Log Parser examples.

Example #1: Retrieving Data from a Custom Log

The first thing that you should try is to simply retrieve data from your custom plug-in, and the following query should serve as an example:

logparser "SELECT * FROM 'C:\sample\books.log'" -i:COM -iProgID:Generic.LogParser.Scriptlet

The above query will return results like the following:

id    publish_date       author               title                                  price
-----------------------------------------------------------------------------------------
bk101 10/1/2000 0:00:00 Gambardella, Matthew XML Developer's Guide 44.950000
bk102 12/16/2000 0:00:00 Ralls, Kim Midnight Rain 5.950000
bk103 11/17/2000 0:00:00 Corets, Eva Maeve Ascendant 5.950000
bk104 3/10/2001 0:00:00 Corets, Eva Oberon's Legacy 5.950000
bk105 9/10/2001 0:00:00 Corets, Eva The Sundered Grail 5.950000
bk106 9/2/2000 0:00:00 Randall, Cynthia Lover Birds 4.950000
bk107 11/2/2000 0:00:00 Thurman, Paula Splish Splash 4.950000
bk108 12/6/2000 0:00:00 Knorr, Stefan Creepy Crawlies 4.950000
bk109 11/2/2000 0:00:00 Kress, Peter Paradox Lost 6.950000
bk110 12/9/2000 0:00:00 O'Brien, Tim Microsoft .NET: The Programming Bible 36.950000
bk111 12/1/2000 0:00:00 O'Brien, Tim MSXML3: A Comprehensive Guide 36.950000
bk112 4/16/2001 0:00:00 Galos, Mike Visual Studio 7: A Comprehensive Guide 49.950000
         
Statistics:  
-----------  
Elements processed: 12
Elements output: 12
Execution time: 0.16 seconds

While the above example works as a proof-of-concept for the plug-in's functionality, it's not terribly useful by itself, so let's look at some additional examples.

Example #2: Reformatting Log File Data

Once you have established that you can retrieve data from your custom plug-in, you can start taking advantage of Log Parser's features to process your log file data. In this example, I will use several of the built-in functions to reformat the data:

logparser "SELECT id AS ID, TO_DATE(publish_date) AS Date, author AS Author, SUBSTR(title,0,20) AS Title, STRCAT(TO_STRING(TO_INT(FLOOR(price))),SUBSTR(TO_STRING(price),INDEX_OF(TO_STRING(price),'.'),3)) AS Price FROM 'C:\sample\books.log'" -i:COM -iProgID:Generic.LogParser.Scriptlet

The above query will return results like the following:

ID    Date       Author               Title                Price
------------------------------------------------------------
bk101 10/1/2000 Gambardella, Matthew XML Developer's Guid 44.95
bk102 12/16/2000 Ralls, Kim Midnight Rain 5.95
bk103 11/17/2000 Corets, Eva Maeve Ascendant 5.95
bk104 3/10/2001 Corets, Eva Oberon's Legacy 5.95
bk105 9/10/2001 Corets, Eva The Sundered Grail 5.95
bk106 9/2/2000 Randall, Cynthia Lover Birds 4.95
bk107 11/2/2000 Thurman, Paula Splish Splash 4.95
bk108 12/6/2000 Knorr, Stefan Creepy Crawlies 4.95
bk109 11/2/2000 Kress, Peter Paradox Lost 6.95
bk110 12/9/2000 O'Brien, Tim Microsoft .NET: The 36.95
bk111 12/1/2000 O'Brien, Tim MSXML3: A Comprehens 36.95
bk112 4/16/2001 Galos, Mike Visual Studio 7: A C 49.95
         
Statistics:  
-----------  
Elements processed: 12
Elements output: 12
Execution time: 0.02 seconds

This example reformats the dates and prices a little more cleanly, and it truncates the book titles at 20 characters so that they fit a little better on some screens.

Example #3: Processing Log File Data

In addition to simply reformatting your data, you can use Log Parser to group, sort, count, total, etc., your data. The following example illustrates how to use Log Parser to count the number of books by author in the log file:

logparser "SELECT author AS Author, COUNT(Title) AS Books FROM 'C:\sample\books.log' GROUP BY Author ORDER BY Author" -i:COM -iProgID:Generic.LogParser.Scriptlet

The above query will return results like the following:

Author               Books
-------------------------
Corets, Eva 3
Galos, Mike 1
Gambardella, Matthew 1
Knorr, Stefan 1
Kress, Peter 1
O'Brien, Tim 2
Ralls, Kim 1
Randall, Cynthia 1
Thurman, Paula 1
   
Statistics:  
-----------  
Elements processed: 12
Elements output: 9
Execution time: 0.03 seconds

The results are pretty straightforward: Log Parser parses the data and presents you with an alphabetized list of authors and the total number of books that were written by each author.
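
As a quick variation on that theme - I have not shown its output here - you could use the SUM aggregate function in much the same way to total the book prices for each author:

logparser "SELECT author AS Author, SUM(price) AS Total FROM 'C:\sample\books.log' GROUP BY Author ORDER BY Author" -i:COM -iProgID:Generic.LogParser.Scriptlet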

Example #4: Creating Charts

You can also use data from your custom log file to create charts through Log Parser. If I modify the above example, all that I need to do is add a few parameters to create a chart:

logparser "SELECT author AS Author, COUNT(Title) AS Books INTO Authors.gif FROM 'C:\sample\books.log' GROUP BY Author ORDER BY Author" -i:COM -iProgID:Generic.LogParser.Scriptlet -fileType:GIF -groupSize:800x600 -chartType:Pie -categories:OFF -values:ON -legend:ON

The above query will create a chart like the following:

I admit that it's not a very pretty-looking chart - you can look at the other posts in my Log Parser series for some examples about making Log Parser charts more interesting.

Summary

In this blog post and my last post, I have illustrated a few examples that should help developers get started writing their own custom input format plug-ins for Log Parser. As I mentioned in each of the blog posts where I have used scriptlets for the COM objects, I would typically use C# or C++ to create a COM component, but using a scriptlet is much easier for demos because it doesn't require installing Visual Studio and compiling a DLL.

There is one last thing that I would like to mention before I finish: I mentioned earlier that I had used Log Parser to reformat the sample Books.xml file into a generic log file that I could use for the examples in this post. Since Log Parser supports XML as an input format and it allows you to customize your output, I wrote the following simple Log Parser query to reformat the XML data into a format that I had often seen used for text-based log files:

logparser.exe "SELECT id,publish_date,author,title,price INTO books.log FROM books.xml" -i:xml -o:tsv -headers:ON -oSeparator:"|"

Actually, this ability to change data formats is one of the hidden gems of Log Parser; I have often used Log Parser to change the data from one type of log file to another - usually so that a different program can access the data. For example, if you were given the log file with a pipe ("|") delimiter like I used as an example, you could easily use Log Parser to convert that data into the CSV format so you could open it in Excel:

logparser.exe "SELECT id,publish_date,author,title,price INTO books.csv FROM books.log" -i:tsv -o:csv -headers:ON -iSeparator:"|" -oDQuotes:on

I hope these past few blog posts help you to get started writing your own custom input format plug-ins for Log Parser.

That's all for now. ;-)


Note: This blog was originally posted at http://blogs.msdn.com/robert_mcmurray/

Advanced Log Parser Part 6 - Creating a Simple Custom Input Format Plug-In

In Part 4 of this series, I illustrated how to create a new COM-based input provider for Log Parser from a custom input format:

Advanced Log Parser Charts Part 4 - Adding Custom Input Formats

For the sample that I published in that blog, I wrote a plug-in that consumed FTP RSCA events, which is highly structured data, and it added a lot of complexity to my example. In the past ten months or so since I published my original blog, I've had several requests for additional information about how to get started writing COM-based input formats for Log Parser, so it occurred to me that perhaps I could have shown a simpler example to get people started instead of diving straight into parsing RSCA data. ;-)

With that in mind, I thought that I would write a couple of blog posts with simpler examples to help anyone who wants to get started writing custom input formats for Log Parser.

For this blog post, I will show you how to create a very basic COM-based input format provider for Log Parser that simply returns static data; you could use this sample as a template to quickly get up-and-running with the basic concepts. (I promise to follow this blog with another real-world example that is still easier to use than my RSCA example.)

A Reminder about Creating COM-based plug-ins for Log Parser

In the blog that I referred to earlier, I mentioned that a COM plug-in has to support the following public methods:

Method Name     Description
-------------   ---------------------------------------------------------------
OpenInput       Opens your data source and sets up any initial environment settings.
GetFieldCount   Returns the number of fields that your plug-in will provide.
GetFieldName    Returns the name of a specified field.
GetFieldType    Returns the datatype of a specified field.
GetValue        Returns the value of a specified field.
ReadRecord      Reads the next record from your data source.
CloseInput      Closes your data source and cleans up any environment settings.

Once you have created and registered a COM plug-in, you call it by using something like the following syntax:

logparser.exe "SELECT * FROM FOO" -i:COM -iProgID:BAR

In the preceding example, FOO is a data source that makes sense to your plug-in, and BAR is the COM class name for your plug-in.

Creating a Simple COM plug-in for Log Parser

Once again, I'm going to demonstrate how to create a COM component by using a scriptlet, which I like to use for demos because they are quick to design, they're easily portable, and updates take place immediately since no compilation is required. (All of that being said, if I were writing a real COM plug-in for Log Parser, I would use C# or C++.)

To create the sample COM plug-in, copy the following code into a text file, and save that file as "Simple.LogParser.Scriptlet.sct" to your computer. (Note: The *.SCT file extension tells Windows that this is a scriptlet file.)

<SCRIPTLET>
  <registration
    Description="Simple Log Parser Scriptlet"
    Progid="Simple.LogParser.Scriptlet"
    Classid="{4e616d65-6f6e-6d65-6973-526f62657274}"
    Version="1.00"
    Remotable="False" />
  <comment>
  EXAMPLE: logparser "SELECT * FROM FOOBAR" -i:COM -iProgID:Simple.LogParser.Scriptlet
  </comment>
  <implements id="Automation" type="Automation">
    <method name="OpenInput">
      <parameter name="strValue"/>
    </method>
    <method name="GetFieldCount" />
    <method name="GetFieldName">
      <parameter name="intFieldIndex"/>
    </method>
    <method name="GetFieldType">
      <parameter name="intFieldIndex"/>
    </method>
    <method name="ReadRecord" />
    <method name="GetValue">
      <parameter name="intFieldIndex"/>
    </method>
    <method name="CloseInput">
      <parameter name="blnAbort"/>
    </method>
  </implements>
  <SCRIPT LANGUAGE="VBScript">

Option Explicit

Const MAX_RECORDS = 5
Dim intRecordCount

' --------------------------------------------------------------------------------
' Open the input session.
' --------------------------------------------------------------------------------

Public Function OpenInput(strValue)
    intRecordCount = 0
End Function

' --------------------------------------------------------------------------------
' Close the input session.
' --------------------------------------------------------------------------------

Public Function CloseInput(blnAbort)
End Function

' --------------------------------------------------------------------------------
' Return the count of fields.
' --------------------------------------------------------------------------------

Public Function GetFieldCount()
    GetFieldCount = 5
End Function

' --------------------------------------------------------------------------------
' Return the specified field's name.
' --------------------------------------------------------------------------------

Public Function GetFieldName(intFieldIndex)
    Select Case CInt(intFieldIndex)
        Case 0:
            GetFieldName = "INTEGER"
        Case 1:
            GetFieldName = "REAL"
        Case 2:
            GetFieldName = "STRING"
        Case 3:
            GetFieldName = "TIMESTAMP"
        Case 4:
            GetFieldName = "NULL"
        Case Else
            GetFieldName = Null
    End Select
End Function

' --------------------------------------------------------------------------------
' Return the specified field's type.
' --------------------------------------------------------------------------------

Public Function GetFieldType(intFieldIndex)
    ' Define the field type constants.
    Const TYPE_INTEGER   = 1
    Const TYPE_REAL      = 2
    Const TYPE_STRING    = 3
    Const TYPE_TIMESTAMP = 4
    Const TYPE_NULL      = 5
    Select Case CInt(intFieldIndex)
        Case 0:
            GetFieldType = TYPE_INTEGER
        Case 1:
            GetFieldType = TYPE_REAL
        Case 2:
            GetFieldType = TYPE_STRING
        Case 3:
            GetFieldType = TYPE_TIMESTAMP
        Case 4:
            GetFieldType = TYPE_NULL
        Case Else
            GetFieldType = Null
    End Select
End Function

' --------------------------------------------------------------------------------
' Return the specified field's value.
' --------------------------------------------------------------------------------

Public Function GetValue(intFieldIndex)
    Select Case CInt(intFieldIndex)
        Case 0:
            GetValue = 1
        Case 1:
            GetValue = 1.0
        Case 2:
            GetValue = "One"
        Case 3:
            GetValue = Now
        Case Else
            GetValue = Null
    End Select
End Function
  
' --------------------------------------------------------------------------------
' Read the next record, and return True or False to indicate whether there is more data.
' --------------------------------------------------------------------------------

Public Function ReadRecord()
    intRecordCount = intRecordCount + 1
    If intRecordCount <= MAX_RECORDS Then
        ReadRecord = True
    Else
        ReadRecord = False
    End If
End Function

  </SCRIPT>

</SCRIPTLET>

After you have saved the scriptlet code to your computer, you register it by using the following syntax:

regsvr32 Simple.LogParser.Scriptlet.sct

At the very minimum, you can now use the COM plug-in with Log Parser by using syntax like the following:

logparser "SELECT * FROM FOOBAR" -i:COM -iProgID:Simple.LogParser.Scriptlet

This will return results like the following:

INTEGER  REAL      STRING  TIMESTAMP           NULL
-------------------------------------------
1 1.000000 One 2/26/2013 19:42:12 -
1 1.000000 One 2/26/2013 19:42:12 -
1 1.000000 One 2/26/2013 19:42:12 -
1 1.000000 One 2/26/2013 19:42:12 -
1 1.000000 One 2/26/2013 19:42:12 -
         
Statistics:        
-----------        
Elements processed: 5      
Elements output: 5      
Execution time: 0.01 seconds      

Next, let's analyze what this sample does.

Examining the Sample Scriptlet Contents in Detail

Here are the different parts of the scriptlet and what they do:

  • The <registration> section of the scriptlet sets up the COM registration information; you'll notice the COM component class name and GUID, as well as version information and a general description. (Note that you should generate your own GUID for each scriptlet that you create.)
  • The <implements> section declares the public methods that the COM plug-in has to support.
  • The <script> section contains the actual implementation:
    • The OpenInput() method opens your data source, although in this example it only initializes the record count. (Note that the value that is passed to the method will be ignored in this example.)
    • The CloseInput() method would normally clean up your session, (e.g. close a data file or database, etc.), but it doesn't do anything in this example.
    • The GetFieldCount() method returns the number of data fields in each record of your data, which is static in this example.
    • The GetFieldName() method returns the name of a field that is passed to the method as a number; the names are static in this example.
    • The GetFieldType() method returns the data type of a field that is passed to the method as a number, which are statically-defined in this example. As a reminder, Log Parser supports the following five data types for COM plug-ins: TYPE_INTEGER, TYPE_REAL, TYPE_STRING, TYPE_TIMESTAMP, and TYPE_NULL.
    • The GetValue() method returns the data value of a field that is passed to the method as a number. Once again, the data values are statically-defined in this example.
    • The ReadRecord() method moves to the next record in your data set; this method returns True if there is data to read, or False when the end of data is reached. In this example, the method increments the record counter and sets the status based on whether the maximum number of records has been reached.

Summary

That wraps up the simplest example that I could put together of a COM-based input provider for Log Parser. In my next blog, I'll show how to create a generic COM-based input provider for Log Parser that you can use to parse text-based log files.


Note: This blog was originally posted at http://blogs.msdn.com/robert_mcmurray/

Replacing the Windows 8 Start Menu

As most people who have installed Windows 8 have realized by now, this new version of Windows is missing something... something very important: a real Start Menu. In their efforts to make Windows more tablet-friendly, the people in charge of the Windows 8 design decided to abandon the user interface which revolutionized the desktop experience upon its inclusion with Windows 95, NT4, 98, ME, 2000, XP, 2003, Vista, 2008, and Windows 7, and have opted for the following layout:

Windows 8 Start Menu

This design was so clunky and so confusing for users that it resulted in the following actual advertisement outside a local computer repair shop:

Removing Windows 8 and Reinstalling Windows 7

The Windows 8 user experience was so bad that none of the older members of my family were able to use it, so I set out to find a replacement for the missing start menu; something which would make Windows 8 look and feel like using Windows 7. With that in mind, I tried out several Windows 8 Start Menu applications with mixed results. I did all of my testing on a desktop version of Windows 8, but all of these will work on the Microsoft Surface Pro tablet, although they will not work on the original ARM-based Microsoft Surface tablet. (See my notes below about that.)

All that being said, here are some of the better Start Menu replacements that I tested:

Start8:

  • URL: http://www.stardock.com/products/start8/
  • Pricing: $4.99
  • Rating: GREAT
  • Feedback: I really liked this start menu; it worked well and it had lots of options - not as many options as some of the menus for which I only gave a GOOD rating, but it was still pretty darn cool. Once you install this start menu system and have it boot into desktop mode, Windows 8 is almost exactly like using Windows 7. (Note that you can buy a license for this application that is bundled together with their ModernMix application which allows you to run Windows Store applications in a window.)

Classic Shell:

  • URL: http://www.classicshell.net/
  • Pricing: FREE
  • Rating: GOOD
  • Feedback: This start menu has lots of configurable options so it's very customizable, but its "Windows 7" start menu is basically the same as its Windows XP start menu with a Windows 7 theme, whereas Start8's Windows 7 start menu is the actual menu style that you expect. That said, since it's open-source you could modify it yourself. ;-)

Start Menu X aka Start Button 8:

  • URL: http://www.startmenux.com/ or http://www.startbutton8.com/
  • Pricing: FREE, although there is a pro version for $19.99
  • Rating: GOOD
  • Feedback: This start menu has a smattering of options, and it is definitely its own beast in terms of what you get for a start menu. But that being said, it does give you a start menu, just not one that you are used to or expecting.

Classic Start 8:

  • URL: http://www.classicstart8.com/
  • Pricing: FREE
  • Rating: ACCEPTABLE
  • Feedback: This start menu has no configurable options, so it cannot be customized. But that being said, its start menu is basically the same as a "Windows 7" start menu. Still, if you need a great freeware approach to getting the start menu back, you can't beat this.
  • UPDATE: This start menu also adds some spamware links to the start menu, so I'm not a big fan of this offering.

RetroUI:

  • URL: http://retroui.com/
  • Pricing: Starts at $4.95 for 1 Consumer Activation and goes up from there
  • Rating: TERRIBLE
  • Feedback: I did not like this start menu at all - it was cumbersome and confusing and it looked awful. (They were trying to go with a Metro-styled start menu, and it just didn't work).

By the way, I wrote to two companies that make Start Menus for Windows 8, and neither will make their product available for Windows RT; they say that the sandboxing features in Windows RT prevent a start menu replacement from working properly. So if you have an original Microsoft Surface RT tablet, not the Microsoft Surface Pro, you're out of luck. :-(



FTP Clients - Part 12: BitKinex

For this installment in my series about FTP clients, I want to take a look at BitKinex 3, which is an FTP client from Barad-Dur, LLC. For this blog I used BitKinex 3.2.3, and it is available from the following URL:

http://www.bitkinex.com/

At the time of this blog post, BitKinex 3 is available for free, and it contains a bunch of features that make it an appealing FTP and WebDAV client.

Fig. 1 - The Help/About dialog in BitKinex 3.

BitKinex 3 Overview

When you open BitKinex 3, it shows four connection types (which it refers to as Data Sources): FTP, HTTP/WebDAV, SFTP/SSH, and My Computer. The main interface is analogous to what you would expect in a Site Manager with other FTP clients - you can define new data sources (connections) to FTP sites and websites:

Fig. 2 - The main BitKinex 3 window.

Creating an FTP data source is pretty straightforward, and there are a fair number of options that you can specify. What's more, data sources can have individual options specified, or they can inherit from a parent node.

Fig. 3 - Creating a new FTP data source.
Fig. 4 - Specifying the options for an FTP data source.

Once a data source has connected, a child window will open and display the folder trees for your local and remote content. (Note: there are several options for customizing how each data source can be displayed.)

Fig. 5 - An open FTP data source.

BitKinex 3 has support for command-line automation, which is pretty handy if you do a lot of scripting like I do. Documentation about automating BitKinex 3 from the command line is available on the BitKinex website at the following URL:

BitKinex Command Line Interface

That being said, the documentation is a bit sparse and there are few examples, so I didn't attempt anything ambitious from a command line during my testing.

Using BitKinex 3 with FTP over SSL (FTPS)

BitKinex 3 has built-in support for FTP over SSL (FTPS), and it supports both Explicit and Implicit FTPS. To specify the FTPS mode, you need to choose the correct mode from the Security drop-down menu for your FTP data source.

Fig. 6 - Specifying the FTPS mode.

Once you have established an FTPS connection through BitKinex 3, the user experience is the same as it is for a standard FTP connection.

Using BitKinex 3 with True FTP Hosts

True FTP hosts are not supported natively, and even though BitKinex 3 allows you to send a custom command after a data source has been opened, I could not find a way to send a custom command before sending user credentials, so true FTP hosts cannot be used.

Using BitKinex 3 with Virtual FTP Hosts

BitKinex 3's login settings allow you to specify the virtual host name as part of the user credentials by using syntax like "ftp.example.com|username" or "ftp.example.com\username", so you can use virtual FTP hosts with BitKinex 3.

Fig. 7 - Specifying an FTP virtual host.

Scorecard for BitKinex 3

This concludes my quick look at a few of the FTP features that are available with BitKinex 3, and here are the scorecard results:

Client Name      Directory   Explicit   Implicit   Virtual   True     Site      Extensibility
                 Browsing    FTPS       FTPS       Hosts     HOSTs    Manager
--------------   ---------   --------   --------   -------   ------   -------   -------------
BitKinex 3.2.3   Rich        Y          Y          Y         N        Y         N/A
Note: I could not find any way to extend the functionality of BitKinex 3; but as I mentioned earlier, it does support command-line automation.

That wraps it up for this blog - BitKinex 3 is a pretty cool FTP client with a lot of options, and I think that my next plan of action is to try out the WebDAV features that are available in BitKinex 3. ;-)


Note: This blog was originally posted at http://blogs.msdn.com/robert_mcmurray/

Restarting the FTP Service Orphans a DLLHOST.EXE Process

I was recently creating a new authentication provider using FTP extensibility, and I ran into a weird behavior that I had seen before. With that in mind, I thought my situation would make a great blog subject because someone else may run into it.

Here are the details of the situation: let's say that you are developing a new FTP provider for IIS, and your code changes never seem to take effect. Your provider appears to be working, it's just that any new functionality is not reflected in your provider's behavior. You restart the FTP service as a troubleshooting step, but that does not appear to make any difference.

I'll bypass mentioning any other troubleshooting tasks and cut to the chase - if you read my Changing the Identity of the FTP 7 Extensibility Process blog post a year ago, you will recall that I mentioned that all custom FTP extensibility providers are executed through COM+ in a DLLHOST.exe process. When you restart the FTP service, that should clean up the DLLHOST.EXE process that is being used for FTP extensibility. However, if you are developing custom FTP providers and the DLLHOST.EXE process is not terminated by the FTP service, you may find yourself in a situation where you have a DLLHOST.EXE process in memory that contains an older copy of your provider, which will not be removed from memory until the DLLHOST.EXE process for FTP extensibility has been forcibly terminated.
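
As a quick sanity check, you can see whether such an orphaned process exists by looking for a DLLHOST.EXE instance that has the FTP extensibility assemblies loaded; the following command uses the same filters that the build commands later in this post will use:

tasklist /fi "MODULES eq Microsoft.Web.FtpServer.*" /fi "IMAGENAME eq DLLHOST.EXE"

If that command still lists a process after you have restarted the FTP service, you are looking at an orphaned copy of your provider.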

If you have read some of my earlier blog posts or walkthroughs on IIS.NET, you may have noticed that I generally like to use a few pre-build and post-build commands in my FTP projects; usually I add these commands in order to automatically register/unregister my FTP providers in the Global Assembly Cache (GAC).

With a little modification and some command-line wizardry, you can automate the termination of any orphaned DLLHOST.EXE processes that are being used for FTP extensibility. With that in mind, here are some example pre-build/post-build commands that will unregister/reregister your provider in the GAC, restart the FTP service, and terminate any orphaned FTP extensibility DLLHOST.EXE processes.

Note: The following syntax was written using Visual Studio 2010; you would need to change "%VS100COMNTOOLS%" to "%VS90COMNTOOLS%" for Visual Studio 2008 or "%VS110COMNTOOLS%" for Visual Studio 2012.

Pre-build Commands:

net stop ftpsvc

call "%VS100COMNTOOLS%\vsvars32.bat">nul

cd /d "$(TargetDir)"

gacutil.exe /uf "$(TargetName)"

for /f "usebackq tokens=1,2* delims=," %%a in (`tasklist /fi "MODULES eq Microsoft.Web.FtpServer.*" /fi "IMAGENAME eq DLLHOST.EXE" /fo csv ^| find /i "dllhost.exe"`) do taskkill /f /pid %%b

Post-build Commands:

call "%VS100COMNTOOLS%\vsvars32.bat">nul

gacutil.exe /if "$(TargetPath)"

net start ftpsvc

The syntax is a little tricky for the FOR statement, so be careful when typing or copying/pasting that into your projects. For example, you need to make sure that all of the code from the FOR statement through the TASKKILL command is on the same line in your project's properties.

When you compile your provider, Visual Studio should display something like the following:

------ Rebuild All started: Project: FtpBlogEngineNetAuthentication, Configuration: Release Any CPU ------
The Microsoft FTP Service service is stopping.
The Microsoft FTP Service service was stopped successfully.

Microsoft (R) .NET Global Assembly Cache Utility. Version 4.0.30319.1
Copyright (c) Microsoft Corporation. All rights reserved.

Assembly: FtpBlogEngineNetAuthentication, Version=1.0.0.0, Culture=neutral, PublicKeyToken=426f62526f636b73, processorArchitecture=MSIL
Uninstalled: FtpBlogEngineNetAuthentication, Version=1.0.0.0, Culture=neutral, PublicKeyToken=426f62526f636b73, processorArchitecture=MSIL
Number of assemblies uninstalled = 1
Number of failures = 0
SUCCESS: The process with PID 12656 has been terminated.
FtpBlogEngineNetAuthentication -> C:\Users\dude\Documents\Visual Studio 2010\Projects\FtpBlogEngineNetAuthentication\FtpBlogEngineNetAuthentication\bin\Release\FtpBlogEngineNetAuthentication.dll
Microsoft (R) .NET Global Assembly Cache Utility. Version 4.0.30319.1
Copyright (c) Microsoft Corporation. All rights reserved.

Assembly successfully added to the cache
The Microsoft FTP Service service is starting.
The Microsoft FTP Service service was started successfully.

========== Rebuild All: 1 succeeded, 0 failed, 0 skipped ==========

If you analyze the output from the build process, you will see that the commands in my earlier samples stopped the FTP service, removed the existing assembly from the GAC, terminated any orphaned DLLHOST.EXE processes, registered the newly-built DLL in the GAC, and then restarted the FTP service.

By utilizing these pre-build/post-build commands, I have been able to work around situations where a DLLHOST.EXE process is being orphaned and caching old assemblies in memory.


Note: This blog was originally posted at http://blogs.msdn.com/robert_mcmurray/

The Road Less Travelled

Perhaps it's because the media is going through yet another season of what seems like a never-ending parade of Hollywood awards programs, but I was thinking the other day about all of the awards that I will never win. For example, I will never win a Golden Globe. I will never win a People's Choice Award. I will never win an Oscar, or a Tony, or an Emmy, or any award that is named after some person who might not be real. And despite a lifetime of playing music, I will never win a Grammy or any other award that the music industry is giving out these days. This may be my reality, but to be perfectly honest, I am never saddened by this, nor do I generally give this concept a second thought.

That being said, the most-recent awards show made me think about the reasons why we even care about those kinds of awards. I can't name who won Best Actor or Actress from any of the awards shows that have taken place in the last several years, and that's really not an issue for me; I'll never meet any of the people who win those awards anyway. What's more, I'm not sure if I would want to meet most of the people who actually win those awards, seeing as how the evening news and morning talk shows are always spinning stories of their latest transgressions. I think the part that gets me the most is how - after throwing their lives away on one selfish pursuit after another - the world eventually calls them "artists," and everyone waxes poetic about how these artists have suffered for their cause; as if they woke up one day and consciously chose to take the road less travelled in Robert Frost's famous poem. When I was younger, I think I bought into that illusion, too. But the older I get, the less I am impressed by their actions - and perhaps I should explain what I mean by that.

If a man whom you knew personally walked out on his wife and family, in most cases you would probably think he was acting like a selfish pig. But if it was a famous actor from Hollywood or a legendary singer from Nashville, you might think to yourself, "Gee, that's too bad...," as if their fame has excused their adverse behavior for some inexplicable reason. You might even go so far as to feel sorry for said person; after all, it's just so sad that their family doesn't understand how hard an artist's life must be.

But why do we feel this way? Why do we put these people on some sort of undeserved pedestal? Is it because they're artists? The more I think about it, I don't believe that they've chosen the road less travelled - I think they've chosen the easy path; they've chosen the path that's all about them. Perhaps that's why they need so many awards shows; they need the constant reassurance that all of the suffering they cause is for a noble purpose. But I just can't bring myself to see it that way.

Let me briefly tell you a true story about my life, and this is difficult for me because it is always dangerous when you open up your life to public scrutiny; you never know what people are going to think. When I was much younger, I faced one of those situations where it seemed like two roads were diverging before me and I had to pick which path I would travel.

I had just celebrated my 19th birthday, and my rock band was starting to do really well. We weren't great by any means, but we were just coming off a series of really great gigs when my fiancée told me that she was pregnant with our child. I had a lot of options before me: we could get married, we could put the baby up for adoption, etc. (My girlfriend had additional concerns: what if I suddenly became some sort of jerk, told her that it was her problem, and left her to face this on her own?) Once the news began to work its way through the grapevine to all our friends and family, I heard a lot of advice from a lot of well-meaning people - all of whom listed off suggestions that were much like the choices that I just mentioned.

But I didn't take anyone's advice. Instead, against everyone else's counsel, I married my girlfriend. We had a baby girl, who is now almost ten years older than I was when I made my choice to keep her. But this decision on my part didn't come without cost; my days of playing long-haired lead guitar for a rock band were over. In fact, my entire youth ended almost overnight - it was time to put aside my personal ambitions and accept the responsibilities that lay before me. My wife and I spent many years in abject poverty as we fought side-by-side to build a home together and raise our children as best we could. Despite the difficult times, my wife and I recently celebrated our 28th anniversary, and we raised three great kids along the way.

However, my life might not have been this way; I could have chosen the other path when I was given the opportunity to do so. I could have chosen something selfish that I wanted just for me, and I could have left my girlfriend to deal with it on her own. Some years later, I could have written a heart-wrenching song about the hard choices that I had to make. Perhaps that could have become a hit, and I could have sold that song to untold scores of fans. Maybe I could have written a book about my life and my admirers might have said, "That's so sad - look at everything he gave up to become who he is."

Every year people walk out on their responsibilities in the hopes that the scenario which I just described will happen to them; they hope they'll be successful despite the pain that they cause to others. What is worse, however, is that popular culture applauds such actions. Songs like Bruce Springsteen's Hungry Heart attempt to spin public opinion in support of egocentric behavior by unapologetically suggesting that a deadbeat dad was simply "following his heart."

Yet in my personal situation this delusion would have been far from the truth; I would have been a selfish punk who left his unwed 18-year-old girlfriend to face the world alone with a newborn baby girl. Perhaps I might have become a successful 'artist' and sent generous child support payments to take care of my daughter's every need, but that's just not the same. Children need parents; they need both a father and a mother to be there to love and raise them.

There is no way that I can say this so it won't sound overly judgmental, but I think it makes someone a coward when they choose their own selfish desires over their family and their responsibilities. When I chose to become a father, I gave up everything that I wanted for myself; I gave up my personal hopes, dreams, and desires for my life. I sacrificed everything so my daughter would grow up with both a mom and dad. My choice was much harder to live with than I ever could have imagined, but my daughter's life was worth the cost.

So in the end, when I finally shrug off this mortal coil, I will not have won any awards for what I have accomplished in my life, and I'll have no golden statuettes to adorn the shelves in my study. I am sure that I will never win father of the year, but my three children will have had better lives because I chose to be their father. I did not choose the easy path for my life - I chose the road less travelled, and I pray that for my family it has made all the difference.

Using Classic ASP and URL Rewrite for Dynamic SEO Functionality

I had another interesting situation present itself recently that I thought would make a good blog: how to use Classic ASP with the IIS URL Rewrite module to dynamically generate Robots.txt and Sitemap.xml files.

Overview

Here's the situation: I host a website for one of my family members, and like everyone else on the Internet, he wanted some better SEO rankings. We discussed a few things that he could do to improve his visibility with search engines, and one of the suggestions that I gave him was to keep his Robots.txt and Sitemap.xml files up-to-date. But there was an additional caveat - he uses two separate DNS names for the same website, and that presents a problem for absolute URLs in either of those files. Before anyone points out that it's usually not a good idea to host multiple DNS names on the same content, there are times when this is acceptable; for example, if you are trying to decide which of several DNS names is the best to use, you might want to bind each name to the same IP address and parse your logs to find out which address is getting the most traffic.
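
As a hypothetical illustration of that last approach: if your W3C logs include the cs-host field, a simple Log Parser query like the following - with the log file path adjusted for your server - would tally the traffic for each host name:

logparser.exe "SELECT cs-host AS Host, COUNT(*) AS Hits FROM u_ex*.log GROUP BY Host ORDER BY Hits DESC" -i:IISW3C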

In any event, the syntax for both Robots.txt and Sitemap.xml files is pretty easy, so I wrote a couple of simple Classic ASP Robots.asp and Sitemap.asp pages that output the correct syntax and DNS-specific URLs for each domain name, and I wrote some simple URL Rewrite rules that rewrite inbound requests for Robots.txt and Sitemap.xml files to the ASP pages, while blocking direct access to the Classic ASP pages themselves.

All of that being said, there are a couple of quick things that I would like to mention before I get to the code:

  • First of all, I chose Classic ASP for the files because it allows the code to run without having to load any additional framework; I could have used ASP.NET or PHP just as easily, but either of those would require additional overhead that isn't really required.
  • Second, the specific website for which I wrote these specific examples consists of all static content that is updated a few times a month, so I wrote the example to parse the physical directory structure for the website's URLs and specified a weekly interval for search engines to revisit the website. All of these options can easily be changed; for example, I reused this code a little while later for a website where all of the content was created dynamically from a database, and I updated the code in the Sitemap.asp file to create the URLs from the dynamically-generated content. (That's really easy to do, but outside the scope of this blog.)

That being said, let's move on to the actual code.

Creating the Required Files

There are three files that you will need to create for this example:

  1. A Robots.asp file to which URL Rewrite will send requests for Robots.txt
  2. A Sitemap.asp file to which URL Rewrite will send requests for Sitemap.xml
  3. A Web.config file that contains the URL Rewrite rules

Step 1 - Creating the Robots.asp File

You need to save the following code sample as Robots.asp in the root of your website; this page will be executed whenever someone requests the Robots.txt file for your website. This example is very simple: it checks for the requested hostname and uses that to dynamically create the absolute URL for the website's Sitemap.xml file.

<%
    Option Explicit
    On Error Resume Next
    
    Dim strUrlRoot
    Dim strHttpHost
    Dim strUserAgent

    Response.Clear
    Response.Buffer = True
    Response.ContentType = "text/plain"
    Response.CacheControl = "public"

    Response.Write "# Robots.txt" & vbCrLf
    Response.Write "# For more information on this file see:" & vbCrLf
    Response.Write "# http://www.robotstxt.org/" & vbCrLf & vbCrLf

    strHttpHost = LCase(Request.ServerVariables("HTTP_HOST"))
    strUserAgent = LCase(Request.ServerVariables("HTTP_USER_AGENT"))
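    ' Note: strUserAgent is not used in this simple example, but it is
    ' captured here in case you want to return user-agent-specific rules.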
    strUrlRoot = "http://" & strHttpHost

    Response.Write "# Define the sitemap path" & vbCrLf
    Response.Write "Sitemap: " & strUrlRoot & "/sitemap.xml" & vbCrLf & vbCrLf

    Response.Write "# Make changes for all web spiders" & vbCrLf
    Response.Write "User-agent: *" & vbCrLf
    Response.Write "Allow: /" & vbCrLf
    Response.Write "Disallow: " & vbCrLf
    Response.End
%>
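
For example, assuming that the website is being browsed as www.example.com (a hypothetical hostname), the preceding code would return a Robots.txt file that looks like the following:

# Robots.txt
# For more information on this file see:
# http://www.robotstxt.org/

# Define the sitemap path
Sitemap: http://www.example.com/sitemap.xml

# Make changes for all web spiders
User-agent: *
Allow: /
Disallow: 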

Step 2 - Creating the Sitemap.asp File

The following example file is also pretty simple, and you would save this code as Sitemap.asp in the root of your website. There is a section in the code that loops through the file system looking for files with the *.html file extension, and it only creates URLs for those files. If you want other files included in your results, or if you want to change the code from static to dynamic content, this is the section that you would need to update. (I have included a small example of that kind of change after the code and notes below.)

<%
    Option Explicit
    On Error Resume Next
    
    Response.Clear
    Response.Buffer = True
    Response.AddHeader "Connection", "Keep-Alive"
    Response.CacheControl = "public"
    
    Dim strFolderArray, lngFolderArray
    Dim strUrlRoot, strPhysicalRoot, strFormat
    Dim strUrlRelative, strExt

    Dim objFSO, objFolder, objFile

    strPhysicalRoot = Server.MapPath("/")
    Set objFSO = Server.CreateObject("Scripting.FileSystemObject")
    
    strUrlRoot = "http://" & Request.ServerVariables("HTTP_HOST")
    
    ' Check for XML or TXT format.
    If UCase(Trim(Request("format")))="XML" Then
        strFormat = "XML"
        Response.ContentType = "text/xml"
    Else
        strFormat = "TXT"
        Response.ContentType = "text/plain"
    End If

    ' Add the UTF-8 Byte Order Mark.
    Response.Write Chr(CByte("&hEF"))
    Response.Write Chr(CByte("&hBB"))
    Response.Write Chr(CByte("&hBF"))
    
    If strFormat = "XML" Then
        Response.Write "<?xml version=""1.0"" encoding=""UTF-8""?>" & vbCrLf
        Response.Write "<urlset xmlns=""http://www.sitemaps.org/schemas/sitemap/0.9"">" & vbCrLf
    End If
    
    ' Always output the root of the website.
    Call WriteUrl(strUrlRoot,Now,"weekly",strFormat)

    ' --------------------------------------------------
    ' This following section contains the logic to parse
    ' the directory tree and return URLs based on the
    ' static *.html files that it locates. This is where
    ' you would change the code for dynamic content.
    ' -------------------------------------------------- 
    strFolderArray = GetFolderTree(strPhysicalRoot)

    For lngFolderArray = 1 to UBound(strFolderArray)
        strUrlRelative = Replace(Mid(strFolderArray(lngFolderArray),Len(strPhysicalRoot)+1),"\","/")
        Set objFolder = objFSO.GetFolder(Server.MapPath("." & strUrlRelative))
        For Each objFile in objFolder.Files
            strExt = objFSO.GetExtensionName(objFile.Name)
            If StrComp(strExt,"html",vbTextCompare)=0 Then
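                ' Skip any Google site verification files, whose file names begin with "google".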
                If StrComp(Left(objFile.Name,6),"google",vbTextCompare)<>0 Then
                    Call WriteUrl(strUrlRoot & strUrlRelative & "/" & objFile.Name, objFile.DateLastModified, "weekly", strFormat)
                End If
            End If
        Next
    Next

    ' --------------------------------------------------
    ' End of file system loop.
    ' --------------------------------------------------     
    If strFormat = "XML" Then
        Response.Write "</urlset>"
    End If
    
    Response.End

    ' ======================================================================
    '
    ' Outputs a sitemap URL to the client in XML or TXT format.
    ' 
    ' tmpStrFreq = always|hourly|daily|weekly|monthly|yearly|never 
    ' tmpStrFormat = TXT|XML
    '
    ' ======================================================================

    Sub WriteUrl(tmpStrUrl,tmpLastModified,tmpStrFreq,tmpStrFormat)
        On Error Resume Next
        ' Check if the request is for XML or TXT and return the appropriate syntax.
        If tmpStrFormat = "XML" Then
            Response.Write " <url>" & vbCrLf
            Response.Write " <loc>" & Server.HtmlEncode(tmpStrUrl) & "</loc>" & vbCrLf
            Response.Write " <lastmod>" & Year(tmpLastModified) & "-" & Right("0" & Month(tmpLastModified),2) & "-" & Right("0" & Day(tmpLastModified),2) & "</lastmod>" & vbCrLf
            Response.Write " <changefreq>" & tmpStrFreq & "</changefreq>" & vbCrLf
            Response.Write " </url>" & vbCrLf
        Else
            Response.Write tmpStrUrl & vbCrLf
        End If
    End Sub

    ' ======================================================================
    '
    ' Returns a string array of folders under a root path
    '
    ' ======================================================================

    Function GetFolderTree(strBaseFolder)
        Dim tmpFolderCount,tmpBaseCount
        Dim tmpFolders()
        Dim tmpFSO,tmpFolder,tmpSubFolder
        ' Define the initial values for the folder counters.
        tmpFolderCount = 1
        tmpBaseCount = 0
        ' Dimension an array to hold the folder names.
        ReDim tmpFolders(1)
        ' Store the root folder in the array.
        tmpFolders(tmpFolderCount) = strBaseFolder
        ' Create file system object.
        Set tmpFSO = Server.CreateObject("Scripting.FileSystemObject")
        ' Loop while we still have folders to process.
        While tmpFolderCount <> tmpBaseCount
            ' Set up a folder object to a base folder.
            Set tmpFolder = tmpFSO.GetFolder(tmpFolders(tmpBaseCount+1))
            ' Loop through the collection of subfolders for the base folder.
            For Each tmpSubFolder In tmpFolder.SubFolders
                ' Increment the folder count.
                tmpFolderCount = tmpFolderCount + 1
                ' Increase the array size
                ReDim Preserve tmpFolders(tmpFolderCount)
                ' Store the folder name in the array.
                tmpFolders(tmpFolderCount) = tmpSubFolder.Path
            Next
            ' Increment the base folder counter.
            tmpBaseCount = tmpBaseCount + 1
        Wend
        GetFolderTree = tmpFolders
    End Function
%>

Note: There are two helper methods in the preceding example that I should call out:

  • The GetFolderTree() function returns a string array of all the folders that are located under a root folder; you could remove that function if you were generating all of your URLs dynamically.
  • The WriteUrl() function outputs an entry for the sitemap file in either XML or TXT format, depending on the file type that is in use. It also allows you to specify the frequency that the specific URL should be indexed (always, hourly, daily, weekly, monthly, yearly, or never).
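
For example, the code creates URLs only for files with the *.html file extension; if you also wanted to include files with the *.htm extension (a hypothetical change), you could update the extension test in the file system loop to something like the following:

            ' Hypothetical tweak: include both *.html and *.htm files in the sitemap.
            If StrComp(strExt,"html",vbTextCompare)=0 Or StrComp(strExt,"htm",vbTextCompare)=0 Then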

Step 3 - Creating the Web.config File

The last step is to add the URL Rewrite rules to the Web.config file in the root of your website. The following example is a complete Web.config file, but you could merge the rules into your existing Web.config file if you have already created one for your website. These rules are pretty simple: they rewrite all inbound requests for Robots.txt to Robots.asp, all requests for Sitemap.xml to Sitemap.asp?format=XML, and all requests for Sitemap.txt to Sitemap.asp?format=TXT; this allows requests for both the XML-based and text-based sitemaps to work, even though the Robots.txt file contains the path to the XML file. The last part of the URL Rewrite syntax returns HTTP 404 errors if anyone tries to send direct requests for either the Robots.asp or Sitemap.asp files; this isn't absolutely necessary, but I like to mask what I'm doing from prying eyes. (I'm kind of geeky that way.)

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rewriteMaps>
        <clear />
        <rewriteMap name="Static URL Rewrites">
          <add key="/robots.txt" value="/robots.asp" />
          <add key="/sitemap.xml" value="/sitemap.asp?format=XML" />
          <add key="/sitemap.txt" value="/sitemap.asp?format=TXT" />
        </rewriteMap>
        <rewriteMap name="Static URL Failures">
          <add key="/robots.asp" value="/" />
          <add key="/sitemap.asp" value="/" />
        </rewriteMap>
      </rewriteMaps>
      <rules>
        <clear />
        <rule name="Static URL Rewrites" patternSyntax="ECMAScript" stopProcessing="true">
          <match url=".*" ignoreCase="true" negate="false" />
          <conditions>
            <add input="{Static URL Rewrites:{REQUEST_URI}}" pattern="(.+)" />
          </conditions>
          <action type="Rewrite" url="{C:1}" appendQueryString="false" redirectType="Temporary" />
        </rule>
        <rule name="Static URL Failures" patternSyntax="ECMAScript" stopProcessing="true">
          <match url=".*" ignoreCase="true" negate="false" />
          <conditions>
            <add input="{Static URL Failures:{REQUEST_URI}}" pattern="(.+)" />
          </conditions>
          <action type="CustomResponse" statusCode="404" subStatusCode="0" />
        </rule>
        <rule name="Prevent rewriting for static files" patternSyntax="Wildcard" stopProcessing="true">
          <match url="*" />
          <conditions>
             <add input="{REQUEST_FILENAME}" matchType="IsFile" />
          </conditions>
          <action type="None" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
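
Once the Web.config file is in place, it's easy to verify that the rules are behaving as expected; for example, using a hypothetical hostname of www.example.com, you could issue the following requests with a web browser or a command-line tool like curl:

curl http://www.example.com/robots.txt
curl http://www.example.com/sitemap.xml
curl http://www.example.com/sitemap.txt
curl -I http://www.example.com/robots.asp

The first three requests should return the dynamically-created Robots.txt file, the XML-based sitemap, and the text-based sitemap respectively, while the last request should return an HTTP 404 error because it asks for one of the Classic ASP pages directly.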

Summary

That sums it up for this blog; I hope that you get some good ideas from it.

For more information about the syntax in Robots.txt and Sitemap.xml files, see the following URLs:

  • http://www.robotstxt.org/
  • http://www.sitemaps.org/

Note: This blog was originally posted at http://blogs.msdn.com/robert_mcmurray/

Upgrading a Baby Computer

I'd like to take a brief departure from my normal series of IIS-related blogs and talk about something very near and dear to the hearts of many geeks - ripping a computer apart and upgrading its various hardware components just because it's fun. ;-)

Several years ago I bought a Dell Inspiron Mini 1011 Laptop, which is a smallish netbook computer with a 10-inch screen. (Actually, I bought this as an alternate laptop for my wife to use when travelling, since she doesn't like to travel with her full-sized laptop.)  This computer eventually became a "coffee-table laptop" for our house, which houseguests use when they come to visit. Since the netbook computer is so small, our family has affectionately labeled it the "Baby Computer."

Recently my wife and I took a trip to Hawaii, and I decided to leave my full-size laptop at home and bring the Baby Computer instead. Since I had never needed to rely on the Baby Computer for anything more than surfing the web in the past, I hadn't realized how quickly it became starved for resources whenever I tried to edit photos or write code. (Yes - I actually write code while on vacation... writing code makes me happy.) The Baby Computer shipped with an underwhelming 1GB of RAM, which filled up quickly if I tried to do too many things at once, and it came with a 120GB 5400rpm hard drive. There was nothing that I could do about the CPU speed, but as I slogged through the rest of my vacation using the Baby Computer, I resolved to find out whether the other hardware in this laptop could be upgraded.

Figure 1 - Performance Before Upgrading

Once we got home from vacation I did some checking, and I discovered that I could expand the Baby Computer's RAM to 2GB, which isn't much, but it obviously doubled what I had been using, and I decided to replace its original hard drive with a 128GB solid-state drive (SSD). With that in mind, I thought that it would be a worthwhile endeavor to document the upgrade process for someone else who wants to do the same thing with their Dell Inspiron Mini 1011. (Of course, you are undoubtedly voiding your Dell warranty the moment that you open your laptop's case.)

First things first - Dell's support website has some great information about tearing apart your laptop; Dell provides a detailed online Service Manual with all of the requisite instructions for replacing most of the parts in the Dell Mini, and I used it as a guide while I performed my upgrades. That being said, the upgrade process was still a little tricky, and some of the parts were difficult to get to. (Although it seems like Dell may have made upgrades a little easier in later models of my laptop.)

So without further introduction, here are the steps for upgrading the RAM and hard drive in a Dell Inspiron Mini 1011 Laptop.

Step 1 - Remove the Screws from the Back of the Case

This step is pretty easy - there are only a handful of screws to remove.

Figure 2 - Removing the Screws

Step 2 - Remove the Keyboard

It's pretty easy to pop the keyboard out of the case...

Figure 3 - Removing the Keyboard

...although once you have the keyboard loose, you need to flip it over carefully and remove the flat ribbon cable from underneath.

Figure 4 - Detaching the Keyboard Cable

Step 3 - Remove the Palm Rest

This step was a little tricky, and it took me a while to accomplish this task because I had to wedge a thin screwdriver in between the case and the palm rest in order to pry it off. Note that there is a flat ribbon cable that attaches the palm rest to the motherboard that you will need to remove.

Figure 5 - Removing the Palm Rest

With the keyboard and palm rest out of the way, you can remove the hard drive - there's a single screw holding the hard drive mount into the case and four screws that hold the hard drive in its mount.

Figure 6 - Removing the Hard Drive

If you were only replacing the hard drive, you could stop here. Since I was upgrading the RAM, I needed to dig deeper.

Step 4 - Remove the Palm Rest Bracket and Motherboard

Once the hard drive is out of the way, you need to remove the motherboard so you can replace the RAM that is located underneath it. There are a handful of screws on the top and bottom of the computer that hold the palm rest bracket to the case...

Figure 7 - After Removing the Palm Rest Bracket

...once you remove the palm rest bracket, you can flip the motherboard over and replace the RAM.

Figure 8 - Replacing the RAM

Optional Step - Cloning the Hard Drive

Rather than reinstalling the operating system from scratch, I cloned Windows from the original hard drive to the SSD. To do this, I placed both the old hard drive and the new SSD into USB-based SATA drive cradles, and then I used Symantec Ghost to clone the operating system from one drive to the other.

Figure 9 - Both Hard Drives in SATA Cradles
Figure 10 - Cloning the Hard Drive with Ghost

Once the clone was completed, all that was left was to install the new SSD and reassemble the computer.

Figure 11 - Installing the New SSD

Summary

Once I had everything completed and reassembled, Windows booted considerably faster with the SSD; it now boots in a matter of seconds. (I wish that I had timed the boot sequence before and after the upgrades, but I didn't think of that earlier... darn.) Running the Windows 7 performance assessment showed a measurable increase in hard drive speed, with little to no increase in RAM speed. Of course, since there was no speed increase for the CPU or graphics, the overall performance score for my laptop remained the same. That being said, with twice as much RAM as before, the laptop should be paging to disk less often, so regular usage should seem a little faster; and even when it does need to swap memory to disk, the SSD will be faster than the old hard drive.

Figure 12 - Performance After Upgrading

That's all for now - have fun. ;-)


Note: This blog was originally posted at http://blogs.msdn.com/robert_mcmurray/

Why I Personally Think the Zune Failed

First and foremost - I am not ashamed to admit that I am a card-carrying Zune fanboy. But that being said, as a faithful owner of several Zune devices, I am ashamed of the way that the Zune team at Microsoft so badly botched their product line; the Zune team was so out of touch with their target consumers that it borders on negligence. Here is my totally-biased list of reasons why I personally think the Zune failed.

My Top 10 Reasons Why the Zune Failed

Reason #1 - Microsoft entered the game with TOO LITTLE TOO LATE

There was already a smattering of MP3 players on the market by the time that Apple's iPod hit the stores. I still have an RCA Lyra device that kicked butt in its day, but my personal favorites were the Creative Zen devices; you plugged a Zen player into your computer and it showed up like an external hard drive. To add music, you simply dragged & dropped music files anywhere you wanted; the Zen devices used your music files' metadata to sort by albums, genres, artists, etc.

When Apple's iPod hit the stores, its main claim to fame was its end-to-end story from iTunes to iPod, all of which belonged to Apple. Their devices were cool, and their advertising was stellar (as always). Even though it was overpriced, the iPod soon became "the product" that everyone wanted. The iTunes/iPod integration was closed to outsiders, which meant that Apple owned the end-to-end experience, and thereby collected all of the profits from it.

When Microsoft eventually realized that Apple was making enough money off its music and device sales to save the company - which had formerly been close to bankruptcy - they decided to create a device and end-to-end experience of their own. But when Microsoft tried to do so, they mostly opted for feature parity with iTunes. What was Microsoft thinking? Instead of improving on the iTunes model, they were trying to break into an established market with a product that had little to offer above and beyond what consumers could already get.

FAIL.

Reason #2 - The early Zune end-to-end experience was terrible

I bought my wife a Zune for Christmas when they first released. Having owned and used several MP3 players in the past, I thought that it would be a similar experience; let me assure you, it was decidedly not a similar experience. I was so frustrated with the first-generation Zune software that I had boxed up the Zune and was ready to take it back to the store within an hour of trying to get it set up for her. I eventually elected not to do so, and I managed to get it working, but it was a crappy experience that made me apologize to my non-technical wife for burdening her with such a mess.

FAIL. FAIL. FAIL.

Reason #3 - You needed to use the Zune software to put files on the device

Customers wanted to use their Zune devices as external storage, but having to use the Zune software to transfer files to the device prevented that. The prevailing argument was that Zune followed the iTunes/iPod model, but who cares if that's the way that iTunes/iPod worked? Zune customers paid good money for their devices, and they wanted to store files on them. USB flash drives were still pretty pricey at the time, so opening up the Zune platform to double as external storage would have been a fantastic selling feature, but that concept escaped the Zune team's leadership because they wanted to force users to use their @#$% software in the hopes that they would be tempted to buy more music and videos through Microsoft.

DESIGN FAIL.

[On a related note, the Windows Phone 7 team did not learn from the Zune's failure, and their devices still had the same, stupid Zune software requirement. BRAIN-DEAD FAIL.]

Reason #4 - You couldn't use Windows Media Player with the Zune

Microsoft already made a killer media player application for Windows that worked with all of the third-party MP3 player devices, but when Microsoft introduced their own MP3 player, it didn't work with their existing Windows Media Player.

EPIC FAIL.

Reason #5 - Zune Didn't Support PlaysForSure

Microsoft spent a bunch of money cozying up to the music industry and MP3 player makers with a program entitled PlaysForSure, whereby devices could be certified to play all Windows-based music files, whether they had copy protection on them or not. Even though all of these third-party companies went through the certification process, Microsoft's own player didn't have to; the Zune didn't support PlaysForSure.

SCREW YOUR PARTNERS FAIL.

[On a related note, this probably hastened the demise of WMA as a file format. SHOOT YOURSELF IN THE FOOT FAIL.]

Reason #6 - The Zune Software for Windows Sucked

For a long time - and I mean a really long time - the software that you needed to use with your Zune was next to worthless. It was slow, buggy, and ugly. By the time that the Zune team finally delivered a version of the Zune software that was actually worth installing, the battle for MP3 player supremacy was over and the iPod ruled uncontested.

SCREW YOUR OWN PLATFORM FAIL.

Reason #7 - Dropping the free downloads from the Zune Pass

The Zune Pass tried to be the Netflix of music, and in that sense it was ahead of the curve when it was introduced. Customers paid $14.95 a month, and in exchange they were granted access to tens of thousands of DRM-based WMA music files - all of which they could download and play on their computers or Zune devices for as long as they kept their Zune Pass up-to-date. In addition, customers got to keep 10 free songs per month in DRM-free MP3 format.

In September 2011, some wunderkind in the Zune group decided to take away the 10 free songs and drop the price of the Zune Pass to $9.99 per month. This person - whoever they may be - is an idiot. With the incredible amount of free music that is available on the Internet now, those 10 free downloads were the only feature that made having a Zune Pass worthwhile.

SCREW YOUR CUSTOMERS FAIL.

Reason #8 - Zune Required Users to Buy Music with 'Points'

The rest of the world works with actual money, but the Zune service required customers to use 'points' to buy music or videos, and points did not map directly to dollars and cents. On Amazon or iTunes, music was typically $0.99 per track, but on Zune it was typically 89 points per track. WTF? What the @#$% was a 'point'?

So let's say that you wanted to buy a music file to download; the Zune software would inform you that you had to buy points first, at some weird exchange rate that didn't make sense. For example, if you bought 400 points for $4.99, your 89-point music file actually cost $1.11, which was $0.12 more than it would have cost on Amazon or iTunes. When Microsoft combined their crappy points-based purchasing system with their overpriced music, they created a truly horrible customer experience.

WTF FAIL.

Reason #9 - It Took Too Long to Market Zune Overseas

The iPod was dominating media player sales all over the world, but believe it or not, there are people who simply don't like Apple. I constantly saw people all over Europe who were clamoring for a Zune, and Microsoft didn't deliver.

DRAG YOUR FEET FAIL.

Reason #10 - No Zune Software for the Mac

Believe it or not, I saw a lot of Mac users who were asking for Zune software on the Mac. I'm not sure if these people were also iTunes users or not, but I think that the concept of the original "10 free songs" with a Zune Pass was appealing to them. Sadly, Microsoft did not deliver - and a whole slew of potential customers were left high and dry.

SCREW YOUR POTENTIAL CUSTOMERS FAIL.

In Closing

Despite all this negativity, the Zune team did deliver some great products - I still own several Zunes from the various generations of players, although it now feels a lot like owning a Tucker automobile or a Betamax VCR.

Here are some of the coolness factors that Zune had:

  • Wireless Networking - when the first Zunes came out, they were the first players to have wireless networking, which allowed Zune users to share files. I remember at the time that some of my Apple fanboy friends remarked that this was a stupid feature; their usual mantra was: "Who wants networking in a handheld device?" Seriously - I got asked that a lot. Now almost every player worth its salt has Wi-Fi support, as do iPhones, Windows Phones, etc. Zune was well ahead of the curve.
  • Larger Storage at Cheaper Prices - for many versions of the Zune, you would get the same features in your Zune for a lot less money than a comparable iPod. The Zune didn't catch on, but that certainly wasn't due to price-per-feature.
  • Larger & Better Screens - when the Zune first came out, its screen was much larger and better than its competitors' screens.
  • HDTV Support - the Zune HD was a great device, and one of the really cool features was output to HDTV from a really tiny device.

In the end, it was very sad for me to see the Zune fail; the Zune was simply a victim of being a superior device with inferior product management.

Zero Dark Thirty

Here's a weird but true story for you: my wife and I went to the movies tonight to see Zero Dark Thirty (which was a good movie, in case you were wondering). Right at the point where the Navy SEALs [spoiler alert] pull the trigger on their main person of interest, a man in the theater started yelling, "VIOLENCE ONLY BEGETS VIOLENCE!!! VIOLENCE ONLY BEGETS VIOLENCE!!!", and he ran out of the building while continuing to scream that phrase like a cultish mantra.
 
This leads me to the following quandary: the subject of this movie was publicly and deliberately advertised ahead of time, so there can be no question that everyone in the auditorium knew before they walked through the theater doors that they were there to watch the CIA and Navy SEALs take down the principal terrorist who planned the tragedies of September 11, 2001. So why would anyone go to this movie expecting anything other than violence?

This movie has an "R" rating because of the violence, and there is a lot of violence in this movie. But oddly enough, the person in question did not run screaming from the theater when [spoiler alert] a lot of European and American lives (both combatants and non-combatants) were premeditatedly and violently taken throughout the two hours of the movie that preceded the brief scene that caused his outburst.
 
The whole affair was surreal, and I am sure that several people (not just me) were nervously wondering if we were about to see a repeat of the tragic theater shooting that took place at the Batman premiere last summer. I'm beginning to think that in the future I'll just wait for movies to come out on Netflix.