Sunday, December 20, 2015

Retooling with Abstraction

Below you will find a presentation I prepared over a year ago for a customer who was contemplating replacing their entire software backbone, moving from one legacy full stack to another.  The reasons for their contemplating this are ultimately not material to the discussion, except that they provided the catalyst for my own thought process.  I could see momentum building behind the "do something" mantra and sought to help them avoid rushing off the cliff in a way that would result in huge disruptions.

My slide deck was intended as an introduction to both the MVC design pattern and software abstraction as a concept, presented at a time when they would most benefit from adopting the sort of approach it represents.  Ultimately they took a path similar to the one I outlined, but with a critical difference: they moved to more SaaS systems rather than building their own.

If you have questions, please feel free to ask me.  I've helped large companies do this sort of transition on many platforms - the tools themselves are not as important as the way they are employed.

The Case for Retooling and Abstraction

Feel free to share this if it makes the conversation with your management or stakeholders easier.  I only ask that you share it from the source link so that I have some idea as to how widely it is used.

Monday, December 14, 2015

Free File Recovery Tool: PhotoRec


CG Security has a free tool for photo and file recovery called PhotoRec.

http://www.cgsecurity.org/wiki/PhotoRec

You can get it for most file systems as a stand-alone tool which will run in a command-line style interface.

I used it this weekend to recover files for a friend; they went missing after his upgrade to Windows 10.  So, a note about that: if you have any data outside your home path, usually C:\Users\yourname, you will lose it when you upgrade to Windows 10 unless you take steps to back it up.

In my friend's case, the files were still on his computer, by the grace of God, and not actually overwritten with new data by the installation of Windows.

I will make a guess that if you're reading this post, you've lost some files (or, more likely, have a friend who has), so let's review a few things everyone should do before the walk-through.

1. Always back up your data.  Put it on another computer, a server, or a cloud service such as Microsoft OneDrive, Dropbox, or Google Drive, or set up a home backup server.

2. Use an automated tool to make your file backups, wait for it..., automatic.  There's nothing worse than having spent money or time on a backup solution that provides no benefit because you forgot to use it.

3. On Windows, as with Linux, user files should always be kept in the user's home directory.

So, here's a quick how-to for PhotoRec, in this case on a Windows laptop that offered 2 USB ports.

1. Download the tool to a USB drive from which you will run it.

2. Scrounge up a couple of extra USB drives for the recovered data.

3. Boot the system and plug in the USB drives.

4. Run PhotoRec, follow the default selections for the most part, and then navigate to your second USB drive and use it as the target for recovering your data.

It's pretty much that simple.  I suggest, though, investigating the file filters before you run the tool.  It will find everything that hasn't been totally obliterated, so narrowing the search filter will save you time and keep huge numbers of false positives from winding up in the recovery folders the tool creates.  You will still need to review the recovered data and cherry-pick the things you wanted.

We were looking for office documents in our example and found a surprising number of things that were not exactly office documents in our recovered-files folder.  You can easily spot the valid files by looking for complete file names and turning on the Authors column in Windows Explorer.  You can also use Windows Search, with advanced options, on the recovery-target USB drive to search for text within the recovered files.

While this process is pretty easy for a technically capable person, it does require some experience to pull off without making matters worse. If you need a hand, leave a comment - I'm happy to help for a reasonable fee.

Tuesday, November 10, 2015

Refactoring During Requirements Analysis

Refactoring is the process of reviewing computer code to look for improvements.  Generally these improvements identify redundancies and replace repeated code with general-purpose (or special-purpose) code, relocate variables for easier maintenance, follow from performance reviews and resulting tweaks, and so on.  Given a proper project life cycle, this should be an ongoing effort.  But there is a not-so-obvious time, before the writing of code even commences, when the refactoring mind-set is highly applicable.

I'll use a recent case by way of illustration.  I spent the better part of last November through June writing code to handle the transformation of payroll data from one system, through an effective in-memory pivot table, out to another format, and then uploading the resulting data to a remote web service.  The rules for pivoting the data were different for almost every state.  In our case, I wound up creating a base class for the timesheet workflow and then subclasses to handle each specific location's special rules.  In all, we identified three requisite subclasses to effectively handle every state in the US.
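The shape of that design can be sketched in a few lines.  This is a hedged illustration in Python, not the project code; the class and rule names are invented:

```python
class TimesheetWorkflow:
    """Base class: holds the pivot and transform steps shared by every state."""

    def transform(self, rows):
        # common pipeline: run every row through the (possibly overridden) rules
        return [self.apply_rules(row) for row in rows]

    def apply_rules(self, row):
        # default behavior: no location-specific handling
        return row


class CaliforniaWorkflow(TimesheetWorkflow):
    """Subclass: overrides only the state-specific rules."""

    def apply_rules(self, row):
        row = dict(row)  # don't mutate the caller's data
        row["daily_overtime"] = row["hours"] > 8  # invented example rule
        return row
```

Only the overridden pieces differ per location; everything else stays in the superclass, which is why changes there ripple through every state's workflow.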

Getting to the point where I could clearly see the overlapping functionality of each workflow took some time.  We went through nine iterations to get California's ruleset working correctly and wound up changing the superclass a few times along the way.  But even before that, I started with a look at the customer-supplied flow charts.  They were tweaked by the project manager / business analyst, and then I had a go at them.  We worked back and forth with the business as I found logical black holes and contradictions, and rules were decided as we went.  This "refactoring" of the paper flow chart provided an iteration phase during design that gave us a pretty good, though far from perfect, topographical view of the class structure that would result before I began writing the code.

Semantics being what they are, we discovered during testing that several terms suffered from the oft-assumed, incorrect perspectives and definitions.  We thought we knew what the business was saying because they used common terms, but within the context of their business flow, some common terms had special meanings.  Had we refactored, or iterated with a view to refinement and correction, during requirements gathering as well as during design, we might have caught those issues.  Some of them were pretty sneaky, though.  Let that be a cautionary tale: always include an insanely detailed glossary of terms.

So, next time you're faced with a complex task, take it in several passes: make paper models like sketches and flow charts, really question your assumptions, and dig into the details.  There's always pressure to rush into the act of producing, whether building, coding, or making drawings.  Resist the headlong rush, take the time to really understand the goals, and go over it again and again until you're sure you've got it.  Even then, you'll be refactoring later.  :-)

Friday, October 30, 2015

Finding out what works - with Fuzzing

Fuzzing is an interesting concept.  From a software testing perspective, it's a means of finding out where code will break by flooding it with a steady stream of inputs, good, bad, or indifferent, to see if you can break it.  The test input is literally garbage, which could be described as fuzzy data, purpose-made to be random enough to find the things you didn't think of that might break your code.

I like to use fuzzing for a secondary purpose: to find out what works, not what is unexpectedly broken.  I remember hearing about an undocumented command with a product I was working with.  It took integers as arguments and had a different UI element for each different integer value passed.  It occurred to me to flood it, using a loop, with a series of numbers well beyond the published range to see what else might be hiding in the product code.  I was rewarded with several hits, some of them very useful.  You might not ever want to build your application around undocumented features, but sometimes they are stable and useful and it can be fun to show off a bit by being able to leverage them.
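That kind of probe is easy to loop.  Here's a minimal Python sketch, where `invoke_ui` stands in for whatever call accepts the integer argument (a hypothetical name, since the original product call isn't shown):

```python
def probe_range(invoke_ui, start=0, stop=10000):
    """Call the target with every integer in the range and collect the hits."""
    hits = []
    for n in range(start, stop):
        try:
            result = invoke_ui(n)
        except Exception:
            continue  # value rejected outright; keep scanning
        if result is not None:
            hits.append((n, result))  # something responded: a candidate feature
    return hits
```

Scanning well past the published range is the whole trick: anything that answers outside the documented values is a candidate for one of those hidden features.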

My recent example of using PLINK to send commands over TCP to a Smart TV presented a potential surface for attack with this method.  I wanted to find out whether, for any of the published 4-character commands, a serially entered number would combine with it to make a code that could return some additional useful information from the TV.  Unfortunately, my testing struck out, but the simple CMD batch file I came up with is very useful for documenting the results.



@echo off
echo Fuzzing Interface...
set cmdlist=TVNM MNRD SWVN IPPV WIDE
setlocal ENABLEDELAYEDEXPANSION

for %%a in (%cmdlist%) do (
    for /L %%n in (1,1,9999) do (
        set "cmd=%%a%%n  "
        echo !cmd! >cmd.txt
        echo !cmd!
        plink 169.254.253.20 -P 10002 -raw < cmd.txt >results.txt
        timeout 1
        set /P r=< results.txt
        if [!r!] EQU [OK] echo !cmd! !r!>> OKcommands.txt
        if [!r!] NEQ [ERR] echo !cmd! !r!>> goodcommands.txt
        if [!r!] EQU [] echo no response
    )
)
echo Done.
@echo on

First, this generates a command file, called cmd.txt, by combining a command sequence with an integer.  So our first command would be TVNM1, and our next would be TVNM2, on up to TVNM9999, the maximum of the input range.

Next, the code uses this command file as input to PLINK, and sends the response from PLINK to results.txt.  

Then it reads the result file into a variable, r, which is evaluated to determine whether the command that was used is sent to the file of things that simply return OK, or to the file of things that return something other than ERR.

It's not perfect; there are some gaps in the code I decided not to take the time to close, because we had moved past the point in the project where it would provide useful information.  For one, the command text should always be exactly 8 characters plus a carriage return, and this code doesn't trim the resulting command down as the integers grow.  I could also do a better job of separating the useful output into different files, but a glance at goodcommands.txt told me what I wanted to know.

There are lots of ways to use this approach and it can be employed in almost any system or language.  A windows cmd batch file is probably one of the more simplistic, if not powerful, ways to use this method.

Wednesday, October 21, 2015

Using PLINK and Batch Files to Control a Smart TV

Editor's Note
What do you know - some new content already.  I think I've pulled all I'm going to pull out of the old archive.  There is so much there that holds little relevance today that it just doesn't make sense.  Going forward, new posts will appear as time and opportunity permit.




I had a customer requirement recently to control an 80" television from a small computer plugged into the HDMI port.  The little computer has one solitary USB port, and the television a single TCP/IP network port.  With a USB network dongle, I was able to connect the computer to the network port on the television.

The TV in this case is a SHARP AQUOS... a really nice TV if you have $1400 and want a picture the size of a large window. Communicating with it proved to be a challenge but with enough research and trial and error, I figured out what to do (many thanks to the internet and Google Search). 

First, it calls for PuTTY, the open-source telnet client, rather than the basic Windows telnet client, because PuTTY provides the ability to specify "raw" as the protocol, which is essential.  Second, I used plink, the command-line version by the same developer, instead of PuTTY itself.  From there, I wrote two files for each command: a batch file holding the command line, and a command file to send to the TV.

TVON.BAT: 


plink 169.254.11.133 -P 10002 -raw < plinkTVON.txt 


This command connects to the IP address the TV assigned for its port.  We use what is called a crossover or null cable - a specially wired network cable - to make the connection with no other network equipment between the TV and the computer.  The TV listens for commands on a configurable port number and defaults to 10002.  Using < plinkTVON.txt allows us to pass the content of the text file over the connection.

plinkTVON.txt 


POWR1___{crlf} 

The SHARP televisions take 8-character commands with a return code at the end.  The text file allows me to put a carriage return where I show the {crlf} above, which "executes" the command on the television.  Underscores are spaces.
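Building those fixed-width commands by hand invites off-by-one mistakes.  A small helper, sketched here in Python under the 8-characters-plus-return format described above, makes the padding explicit:

```python
def sharp_cmd(command, parameter):
    """Pad a 4-character command plus its parameter to 8 characters
    (spaces, shown as underscores in the post) and append the carriage
    return that executes it on the television."""
    body = (command + str(parameter)).ljust(8)
    if len(body) != 8:
        raise ValueError("command plus parameter must fit in 8 characters")
    return (body + "\r").encode("ascii")
```

So sharp_cmd("POWR", 1) produces the wire form of POWR1___{crlf}.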

Now, one thing that puzzled me: I couldn't actually turn the TV back on once I sent a POWR0___ command.  As it turns out, you have to send RSPW1___ first, to tell the TV to remain in a standby state capable of accepting the POWR1___ command.  The documentation does not make this clear, and it was by the grace of God and a really random internet posting that I learned about that little bit of undocumented goodness.

Now we can schedule the television to turn on and off, saving power when it's not being used to show videos and slide shows at the customer's place of business.  This makes digital signs and internal communications a lot more configurable and powerful.  There is more to the overall solution that I may have time to discuss later, but this was the difficult part I thought worth sharing.

Tuesday, October 20, 2015

LotusScript: Bitmasks

Originally published 

Sat 18 Nov 2006

Editor's Notes
I decided to pull this one out of the archive because it presents a clear lesson on bitmasks: how to make use of them and, hopefully, how to understand them.  The function presented exists really only for those purposes, being difficult to use in practice due to the time consumed figuring out which values you want to add together to get the desired result.

Original Post

Bitmasks are useful little items to have in your coder's toolbox. A bitmask allows you to extract multiple on/off flags from one integer. You may have seen this when working with some LotusScript built-in functions, like MessageBox, where you can pass an integer, or a sum of integers, to specify icon and button options. Here I aim to show you how it works and explain it a bit (no pun intended).

Let's look quickly at what bits are. Bits are the 1's and 0's of binary. They are at the lowest level of all programming languages, representing, in many cases, high and low voltages within your computer's memory. Back when the Commodore 64 came out, it used an 8-bit operating system. In that environment, the largest integer you can represent with a single byte (made of 8 bits) is 255, the sum of 2^0 through 2^7. The important thing to understand is that the bits represent powers of 2, usually read from right to left starting with 2^0.

 
Bit value   =  128   64   32   16    8    4    2    1
Bit setting =    0    0    1    0    0    1    0    0
Total value =             32    +         4    =   36
This is an easy example, and one you have probably used with the MessageBox function (MB_ICONQUESTION + MB_YESNO). Basically the bits tell us that 32 and 4 are on or high, for a total of 36. If we are passing this value to a function, we can just use 36 and it will know via logical comparison that we meant 32 + 4 or 00100100.
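The same comparison is easy to try for yourself. Here's a quick sketch in Python, whose & operator behaves like the bitwise And used below:

```python
MB_YESNO = 4
MB_ICONQUESTION = 32

flags = MB_ICONQUESTION + MB_YESNO  # 36, i.e. binary 00100100

# a flag is "on" exactly when ANDing its value against the total is nonzero
assert flags & 32 != 0  # the icon flag is set
assert flags & 4 != 0   # the button flag is set
assert flags & 16 == 0  # this bit was never set
```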

Now we hopefully understand, to a degree, how LotusScript uses this internally. How do we make use of this same power? Wouldn't it be nice to create a function that takes one value as opposed to a myriad of flags? Just think how much more compressed a function like DialogBox could be without all those true and false flags.

So, that sounds like a nice demonstration - let's write a more condensed wrapper for Workspace.DialogBox. It's not terribly useful or efficient, but it's a quick way to show you how to use a bitmask and provide a template so you can employ it with your own functions. The current function definition appears below.

flag = notesUIWorkspace.DialogBox( form$ , autoHorzFit , autoVertFit , noCancel , noNewFields , noFieldUpdate , readOnly , title$ , notesDocument , sizeToTable , noOkCancel , okCancelAtBottom )
What a mess! Many of those optional parameters are accepted as boolean values, true or false, which maps logically to binary, on or off. In our example below, we arbitrarily assign a bit position within a value of suitable length to each flag. We'll use the And operator of LotusScript to determine which flags are on or off based on the receipt of a single value (a Long in this case).

Function UIDialog(sForm as string, sTitle as String, nDoc as NotesDocument, hFlags as Long) as integer
Dim wk as new NotesUIWorkspace

' Set up our bit registers and note their arbitrary values
Dim autoHorz as boolean ' 1
Dim autoVert as boolean ' 2
Dim noCancel as boolean ' 4
Dim noNewFields as boolean ' 8
Dim noFieldUpdate as boolean ' 16
Dim readOnly as boolean ' 32
Dim sizeToTable as boolean ' 64
Dim noOkCancel as boolean ' 128
Dim okCancelAtBottom as boolean ' 256

' set our registers based on the passed in flags value. the AND comparison will only evaluate TRUE if the bit compared to the flag can be factored out of it.
autoHorz = 1 And hFlags
autoVert = 2 And hFlags
noCancel = 4 And hFlags
noNewFields = 8 And hFlags
noFieldUpdate = 16 And hFlags
readOnly = 32 And hFlags
sizeToTable = 64 And hFlags
noOkCancel = 128 And hFlags
okCancelAtBottom = 256 And hFlags

' now we call our wrapped target with all the gory flags
UIDialog = wk.DialogBox(sForm, autoHorz, autoVert, noCancel, noNewFields, noFieldUpdate, readOnly, sTitle, nDoc, sizeToTable, noOkCancel, okCancelAtBottom)
End Function


Now, for the payoff for our extra effort. Let's say we want to call a dialog box that fits to a table in a form called "MyTableDialog", which has its own OK and Cancel buttons built in. For size-to-table to work right, I also need to specify autoHorzFit and autoVertFit. NoOkCancel by itself produces just a Cancel button, oddly, so I'll add the flag for NoCancel as well.

call UIDialog("MyTableDialog", "Some nice Title", uidoc.document, 1 + 2 + 4 + 64 + 128)
I showed the component flags expressed as addition, but we could have written it as 199 for even more brevity. When the function executes, each And operation will evaluate to False except where our mask allows the flag's arbitrary value to pass. The values we masked against the hFlags parameter can each be factored out as powers of 2. What this means is that any value between 0 and 511 will be acceptable and will turn the appropriate flags on or off. The only drawback is that we have to consult the code or documentation to use this kind of flag compression. Otherwise, it's easy to implement and provides quite a bit of power.

LotusScript: superString Class

Originally published 

Wed 19 Sep 2007

Editor's Notes
Admittedly, the usefulness of LotusScript has waned in the past half decade, as the once-ubiquitous product has steadily lost market share in some regions while gaining it in others, and has shifted from LotusScript as the core language offering towards Java, JavaScript, and IBM's rather bizarre Lotus Formula language.  But this library is one of the few that I enjoyed creating, using, and sharing.  Some of these functions are no longer needed thanks to updates to the LotusScript engine in release 8.

Original Post

Update: Added GetAllSubstrings, rewrote Substring. v 0.0.9 07/31/2008
Update: Added Prepend. v 0.0.8 10/4/2007
Update: Added a Trim wrapper that operates on the buffer. v 0.0.7 10/3/2007
Update: My apologies, some comments were lost when the blog server took a dive this past week. Sean Burgess had previously commented, and a coworker provided an update to the code, which appears below. Current version is 0.0.6 as of 9/28/2007.
Here's a little something I've been working on. It's a script library that offers a few string functions we would otherwise be able to overload onto the String data type, if it were an actual object built on a base object as in Java, rather than a primitive.
I included some of my favorite string operations from JavaScript, for the most part, and added a few others I find handy. The class uses a string primitive as a buffer, which has an upper limit of 2 GB. That should be plenty for just about anything you want to do with a string.
Highlights:

  • .Slice
  • .Length
  • .Append
  • .pos
  • .subString
  • .Strip
  • .Contains
  • .ToList
  • .setText
  • .Text
Classes are hard to stop playing with once you start. If nothing else, this should serve to illustrate how handy a class can be for even mundane tasks.

'superString: 

Option Public
Option Declare
Class superString 
 
 'superString:  
 
 
''' Freely Distributable with copyright intact
'   Copyright 2007 - Datatribe Softwerks, Ltd. - Jerome E. Carter, II
 
''' v 0.0.9 - Jerry Carter 7/31/2008 - added GetAllSubstrings, rewrote Substring to wrap it as it's easier to understand and troubleshoot
 
''' v0.0.8 - Jerry Carter 10/4/2007 - added Prepend Subroutine
' Should have been obvious to begin with, but I didn't think of it till I needed it!
' prepends the supplied string to the buffer
 
''' v 0.0.7 - Jerry Carter 10/3/2007 - added Trim Subroutine.  
' Simply wraps the buffer in the Trim command.  Reduces complexity of 
' code needed externally to perform the operation against the class.
 
''' v 0.0.6 - correction provided by M Burgo
' Substring was incorrectly finding the last instance of the suffix rather than the first
 
''' v 0.0.5 - Jerry Carter - 9/19/2007
' added Sub Strip which operates on the resident buffer.  end result available via Text method.
 
 
 ''' superString - by Jerry Carter - 8/31/2007 - v 0.0.4
 ' Notes base data types are not derived from an object but are static final primitives
 ' therefore we can not declare a class like  Public Class superString as String and be able to extend String
 
 ''' Private Members
 Private buff As String '  strings are limited to 2GB - should be sufficient for most things
 
 ''' Constructor
 Sub new (initVal As String)
  Me.buff = initVal  
 End Sub
 
 
 ''' Public Methods '''
 '---------------------'
 
 '''  Append  '''
 ' Adds the inbound string to the end of the buffer
 Public Sub Append(inputStr As String)
  Me.buff = Me.buff + inputStr
 End Sub
 
 ''' Prepend
 '  Add the inbound string to the beginning of the buffer
 Public Sub Prepend(inputStr As String)
  Me.buff = inputStr + Me.buff
 End Sub
 
 ''' ToList '''
 ' Breaks the buffer into an unordered list, removing the delimiter in the process
 Public Function ToList(delim As String) As Variant
  Dim tmpList List As String
  Dim tmpArr As Variant
  tmpArr = Split(Me.buff,delim)
  Dim i As Integer
  For i = 0 To Ubound(tmpArr)
   tmpList(Cstr(i)) = tmparr(i)
  Next
  ToList = tmpList
 End Function
 
 ''' Strip
 ' Removes the supplied string argument from the buffer.
 Public Sub Strip(stripStr As String)
  Me.setText Join(Split(Me.text,stripStr),"") 
 End Sub
 
 
 ''' Text '''
 ' returns the buffer as a string
 Public Function Text() As String
  Text = Me.buff
 End Function
 
 ''' Length '''
 ' returns the total number of characters as a Long
 Public Function Length() As Long
  Length = Len(Me.buff)
 End Function
 
 ''' GetAllSubstrings
 ' returns all instances of the substring found between the supplied prefix and suffix
 ' e.g.  My <%tagged%> markup should produce <%bonus%> material
 ' returns a list containing "tagged" and "bonus" if <% is the prefix and %> is the suffix
 Public Function GetAllSubstrings(prefix As String, suffix As String) As Variant
  On Error Goto eh
  Dim blist As Variant
  blist = Me.ToList(prefix)
  Dim clist List As String
  If Islist(blist) Then
   Forall chunk In blist
    If Instr(chunk,suffix) > 0 Then
     clist(Listtag(chunk))= Left(chunk,Clng(Instr(chunk,suffix)-1))
    End If
   End Forall
  Else
   clist("error") = "A substring list could not be formed with the supplied prefix and suffix"
  End If 
  GetAllSubstrings = clist  
  Exit Function
eh:
  Msgbox "Error in GetAllSubstrings: " + Error + " at " + Cstr(Erl)
  Exit Function
 End Function
 
 ''' SubString '''
 ' returns the string appearing between the prefix and the suffix
 Public Function SubString(prefix As String, suffix As String) As String
 
  ' Updated 7/31/2008 to take advantage of new function GetAllSubstrings
  Dim blist As Variant
  blist = GetAllSubstrings(prefix,suffix)
  Forall n In blist
   Substring = n
   Exit Forall
  End Forall
  
 End Function
 
 ''' Slice '''
 ' works like JavaScript string.slice(startpos,endpos)
 Public Function Slice(dstart As Long, dend As Long) As String 
  Slice = Mid$(Me.buff, dstart, dend-dstart)
 End Function
 
 ''' SetText '''
 ' replaces the buffer with the inbound string
 Public Sub SetText(newval As String)
  Me.buff = newval
 End Sub
 
 ''' Contains '''
 ' Simple test to see if the parameter is anywhere in the buffer
 Public Function Contains(strIN As String) As Boolean
  If Instr(Me.buff,strIN) > 0 Then
   Contains= True
  Else
   Contains = False
  End If
 End Function
 
 ''' Pos[ition] '''
 ' returns the position of a substring as a Long
 Public Function Pos(strIn As String) As Long
  Pos = Instr(Me.buff,strIn)
 End Function
 
 '''Trim'''
 ' performs LS trim on the buffer
 Public Sub Trim()
  Me.buff = Trim(Me.buff)
 End Sub
 
End Class



LotusScript: Binary Registry Values Made Easy

Originally published 

Wed 27 Aug 2008

Editor's Notes
One of the truly "hard won" bits of knowledge.  I spent the better part of a day or two getting this to work out just right.  It was such a bizarre requirement, but at long last, there was a way to get what the customer wanted.

Original Post

For a long time, you've (probably) known that you can use the Windows Scripting host to read and write string values to and from the Windows Registry. With it, you can also read and write short REG_BINARY values. But in some cases, you need to write long arrays of REG_BINARY data to the registry and the Windows Scripting host won't help you.
First, the obligatory back-story. I've been working on a very particular click-saving requirement for the past three days for my current customer. Simply put, when program X's file save dialog is activated, it should automatically be in the predesignated folder. Easy enough when you are calling the File Save dialog yourself; not so easy when you are shelling a program and waiting for it to raise the File Save dialog itself.
I spent considerable time playing with the RegSetValueEx API function found in advapi32.dll. (Considerable time means you're getting quite a bargain just for the price of reading this article.) I eventually got to the point where I could get it to write REG_BINARY values, but there was a problem. My data looked like this.
0000 28 89 DB 04 10 89 DB 04
0008 BF 05 00 00 0E 00 00 00
0010 01 00 01 00 07 00 00 00
0018 B8 CC BB 08 00 00 36 00
0020 00 00 00 00 70 64 66 77
0028 72 69 74 65 72 2E 65 78
0030 65 20 63 3A 5C 
That 70 I have highlighted at address 0024 should be at address 0000. Wondering about the addresses? Think hex. In hexadecimal, 0010 is not 2 greater than 0008; it's 8 greater (0008 0009 000a 000b 000c 000d 000e 000f 0010). I have no idea where the garbage characters preceding my data came from, but they were pretty consistent regardless of whether I passed ASCII or hex values.
After much research, I found mention of the Windows Management Instrumentation COM object in an MSDN article. Based on that example, I created the class shown below, which provides a simple and easy way to send up a string and have it properly inserted into the registry, so that up at position 0000 we would see the expected hex value 70. Here's what it should have looked like.

0000 70 00 64 00 66 00 77 00
0008 72 00 69 00 74 00 65 00
0010 72 00 2E 00 65 00 78 00
0018 65 00 00 00 43 00 3A 00
0020 5C 00 49 00 6D 00 61 00
0028 67 00 65 00 73 00 5C 00
0030 78 00 78 00 31 00
Notice also that the characters are spaced apart with an empty byte. At this point, I'm really not totally sure why. What we see here are the hex values of the ASCII characters derived from the input string. The spacer provides enough room that you could put a Unicode value within the Basic Multilingual Plane across the four positions. I'm still learning about Unicode and the Windows registry itself, though, so I could be way off there. What I do know is that I put hex in and get decimal ASCII values back, which is strange - I would have thought hex in, hex out... but then again, it's Windows. :-)
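For whatever it's worth, that interleaved-zero layout is consistent with the string being stored as little-endian UTF-16, where each ASCII character occupies two bytes. A quick check (Python here, just for illustration) reproduces the corrected dump:

```python
encoded = "pdfwriter.exe".encode("utf-16-le")  # each ASCII char becomes char + 0x00
dump = " ".join(f"{b:02X}" for b in encoded[:8])
print(dump)  # 70 00 64 00 66 00 77 00
```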
WMI Wrapper Class
This ONLY wraps the functionality discussed above. Have fun, and be careful.

Class WMIWrapper
 Public wmi As Variant
 Public haserr As Boolean
 Public lasterr As String
 
 Sub new()
  Dim strComputer As String
  strComputer = "."
  
  Set Me.wmi=GetObject( "winmgmts:{impersonationLevel=impersonate}!\\" & strComputer & "\root\default:StdRegProv") 
  
 End Sub
 
 Public Function writeBinRegKey(regnode As Variant, keypath As String, keyname As String, keyvalue As String) As Boolean
  On Error Goto eh
  Dim sValues() As String  
  Call makeHexArray(keyvalue,sValues)  
  Me.wmi.CreateKey regnode, keypath  
  Me.wmi.SetBinaryValue regnode,keypath,keyname,sValues
  writeBinRegKey = True
  Exit Function
eh:
  writeBinRegKey = False
  Me.haserr = True
  Me.lasterr = "public function writeBinRegKey - " + Error + " at " + Cstr(Erl)
  Exit Function
 End Function
 
 Public Function readBinRegKey(regnode As Variant, keypath As String, keyname As String) As String
  On Error Goto eh
  Dim sValues As Variant
  Me.wmi.GetBinaryValue regnode, keypath, keyname, sValues
  readBinRegKey = AscArrToString(sValues)
  Exit Function
eh:
  Me.haserr = True
  Me.lasterr = "public function readBinRegKey - " + Error + " at " + Cstr(Erl)
  Exit Function
 End Function
 
 Private Function AscArrToString(ascArr As Variant) As String
  On Error Goto eh
  Dim i As Long
  Dim buff As String
  
  For i = 0 To Ubound(ascArr)
   If Not ascArr(i) = 0 Then
    buff = buff + Chr(ascArr(i))
   End If
  Next
  AscArrToString = buff
  Exit Function
eh:
  Msgbox "Error in AscArrToString - " + Error + " at " + Cstr(Erl)
  Exit Function
 End Function
 
 
 Private Sub makeHexArray(inputstr As String,tmparr() As String)
  On Error Goto eh
  Redim tmparr(0)
  Dim i As Long
  Dim tmpasc As String
  Dim tmpstr As String
  Dim pos As Long
  pos =1
  For i = 1 To Len(inputstr)*2
   Redim Preserve tmparr(i-1)
   If (i-1) Mod 2 = 0 Then
    tmpstr = Right(Left(inputstr,pos),1)    
    tmpasc = "&H" + Cstr(Hex$(Cstr(Asc(tmpstr))))    
    tmparr(i-1) = tmpasc
    pos = pos + 1
   Else
    tmparr(i-1) = "&H" + Cstr(Hex$(0))
   End If   
  Next   
  Exit Sub
eh:
  Me.haserr = True
  Me.lasterr = "private sub: makeHexArray - " + Error + " at " + Cstr(Erl)
  Exit Sub
 End Sub
End Class

Web: JSON or XML?

Originally published 

Mon 8 Dec 2008

Editor's Notes
Some thinking out loud on the topic of portability for complex data objects.  To this day, I prefer JSON for UI programming and XML for integrated data delivery.

Original Post


The question, JSON or XML, is posed not so much as a turf-war-inducing prod but as an examination of propriety. When would you use each, and what are some favored methods (and frequent pitfalls) worth discussing?

In my brief experience with these two data encapsulation formats, I've learned things are never as easy as they appear. With XML, you have schema and validation concerns. With JSON you have validation in the sense that it must be valid JavaScript. With both you have to have a way to communicate the meaning of the content to the consumers, either via a description file (like a WSDL) or through documentation.

Bandwidth really isn't as much of a concern as it was in the past. The fact that you're loading the data as a message and not the entire page makes any delay small in either case, though likely smaller for JSON as there is very little space wasted on markup. But this does raise the question of security. An easy way to access JSON data is to use an eval statement. Unless you're the only consumer, i.e. you don't plan for others to use it, you would have a difficult time convincing folks to use your service. It goes without saying (which is my way of saying I'm going to say it) that you should never eval or use a hosted script you don't own or explicitly trust.
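On the eval point: JSON.parse validates the payload and never executes it, so there is no reason to eval message data. A minimal sketch (the function name is mine, for illustration):

```javascript
// Parse an ajax response body safely. JSON.parse throws on anything that
// is not strict JSON, so script injected into the payload is never executed.
function parseMessage(responseText) {
  try {
    return JSON.parse(responseText);
  } catch (e) {
    return null; // malformed or malicious payload - reject it
  }
}
```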

So, presently, it seems the arguments for and against each balance out. The only real effort is coming up with a schema and designing your object. In that case, JSON is the easier choice, though there are plenty of tools that make it a simple task for XML as well. Processing time likewise could be considered a function of the complexity and size of the message.

Horses for courses, in the end. If it's a small bit of data, JSON is probably the quickest way to get rolling. If it's going to be a complex affair with public exposure, or you have standards-based concerns (there is presently little in the way of adopted JSON standards), the minor extra effort and thought that goes into XML is probably warranted.

I will say, though, XML requires far more planning and forethought than JSON. So again, here we have not so much a detraction for XML as a qualifier for when to use it. Robust systems designed to scale well should probably use XML. Small systems (or small components) with little or no plan to ever expose the functionality are probably quickest served by JSON.

As a side note, I found it interesting to know that the State of Ohio has totally disallowed AJAX for the moment as it presents "too much surface area" for potential attacks. I found that to be an interesting statement, but these are the folks that are paid to be paranoid and keep our private data private, so I take it with some authority. If you found yourself in this situation, JSON will be harder to implement without a special server component to evaluate JavaScript server side (something that is becoming more common by degrees) and XML (readily consumed by many server side languages) becomes the clear choice. This brings up another instance where you'd want to use XML - where the data provided must be consumable by a server based service. The abundance of XML parsing tool kits for server side processors makes it a defacto standard in this scenario.
Anyone have strong feelings one way or the other?

JavaScript: JQuery Event Binding Tip

Originally published 

Tue 9 Dec 2008

Editor's Notes
Many of the tips in my original blog were the sort I discovered through trial and error and quickly posted for safekeeping, such as this item, which did in fact take me some time to figure out.

Original Post


I've been using JQuery a LOT at work lately and loving it. It lives up to its promise to allow you to do more and write less code. Presently I'm working on a totally dynamic interface with it, and I discovered an interesting thing with event binding worth noting.

When you bind an event, it's like writing the JavaScript directly into the HTML. For example, $('#somediv').click puts your function in the OnClick event for the target element. What I found was that this is an append, not an overwrite, by default. So if I call a function that performs the event binding more than once, my function is now 'bound' multiple times, which causes my OnClick event to fire the target function multiple times, possibly with unintended consequences.
To prevent this from happening, just call .unbind before you bind.

$('#somediv').unbind('click');
$('#somediv').click(function() {stuff;});

After that, the function is bound a single time, not cumulatively. You can imagine that took more than a couple of minutes to figure out!
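The append behavior can be modeled in a few lines of plain JavaScript (a sketch of the semantics, not jQuery's actual implementation):

```javascript
// Minimal model of how .click() accumulates handlers: each bind appends,
// so binding twice fires the handler twice unless you unbind first.
function Emitter() { this.handlers = []; }
Emitter.prototype.bind = function (fn) { this.handlers.push(fn); };
Emitter.prototype.unbind = function () { this.handlers = []; };
Emitter.prototype.fire = function () {
  this.handlers.forEach(function (fn) { fn(); });
};
```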

Editor's further note...
It's far safer to bind events to the top level container, one time, at startup.  All events bubble up to the top level container*.  (* usually).

Tools: IDE's and more

Originally published 

Mon 9 Feb 2009

Editor's Notes
The first three tools here are still things I use regularly.  Aptana is in version 3 or higher now, and XMLPad and DiffMerge have just been solid tools to have around for a long time.  Flex, something I'm running across a lot in my review of my old blog articles, is fading now as a distant memory.  It was a tremendous idea at one time, but the last I heard, Adobe had pretty much killed it off.  I could be wrong; I've not had need of it for years.

Original Post

Several gentle reminders have arrived lately, from various sources, to the fact that I've been short on technical content. I have also been learning a lot lately thanks to the efforts of Jake Howlett and his super Flex introduction. As I've had a lot of XML to work with, as well as a lot of varied development, the following short list of new tools you should give a look came to mind today.

Aptana - The community edition is powerful enough to be very useful for CSS, JavaScript and more. I used it extensively for a month or so with the built-in jQuery support. It comes with a 30 day trial of the pro version which, so far as I can tell, buys you a few more project start-up options and Jaxer (server side JavaScript) support. Based on Eclipse.

XMLPad - a simple XML pad type tool for Windows, which is what I was looking for when Google turned this up. I downloaded it and love it as a quick, light-weight XML browser and editor. Very useful for analyzing and editing static XML or browsing its structure. Free!

DiffMerge - from SourceGear, more useful for analysis than coding, but very very nice for comparing files and folders to find those elusive little errors. Free!

It goes without saying that I've been playing with FlexBuilder (60 day trial) as well. Will probably be getting the pro version through work as it's definitely not free. But, that brings to mind that most of these new IDEs are taking advantage of the Eclipse base. Running Sametime in Eclipse, plus Flex Builder 3, plus Aptana (I run Notes in basic mode as the Eclipse version has yet to win me over) is too much for my lowly 2 GB laptop. Adding more memory probably would help, but Win XP will only make use of 3 GB from what I hear. So, memory utilization has to be considered when loading up a lot of new tools, which is why I like the smaller specific-purpose tools so much over the larger general-purpose IDEs. Note that you can download Aptana as an Eclipse plugin to add to your existing instance of Eclipse, but your mileage may vary. I'm not sure what performance would be like with Domino + Aptana or Flex Builder + Aptana plugin.


Design: Messaging Queues and the UI

Originally published 

Mon 7 Sep 2009

Editor's Notes
Some of my old blog posts are less practical tip than thinking out loud.  While most of these centered around Lotus Notes, as it was at one time my bread and butter, there are concepts in them that I think are useful in other contexts, so a few of those articles will appear here, as this one does.

Original Post

A number of years ago I wrote an article for e-Pro Magazine treating the topic of emulating a messaging queue with standard Domino technology. That first article was aimed at handling simultaneous data access from multiple users against individual documents. Recently I again used the concept of a messaging queue to handle a tricky problem, this time on the Notes Client UI.

Let's just cover, briefly, what a messaging queue is and is supposed to do. A messaging queue is just a place where individual instructions are dropped by one or more processes to be picked up, in order received, and executed by another. You may have as many processes as you like adding messages to the queue (analogous to an array.push) but must have only one process removing messages (array.pop); access to the queue itself is typically serialized with a mutex.

Despite the appearance of running more than one program at once, multi-threading is usually just thread management, through a mutex, processing one instruction set at a time, albeit rather quickly. This last bit is somewhat central to the concept of a messaging queue, but not critical to the implementation I discuss below - not entirely. You see, for a messaging queue to work properly, a message has to be cleared once it is processed. If you leave the message on the queue, it can be reprocessed ad infinitum, causing all manner of mayhem. I mention this mostly because I have encountered some attempts at using this architecture that used a latent or come-along process to clear messages, and that is an approach fraught with problems.

Happily, our example today is simple by comparison: a single instruction queue. Only one message can be in the queue at a time. If another message comes in before the instruction in the queue is processed, the new instruction overwrites and discards the old one, so the last instruction in is the one processed. This is useful for a UI, as you will see.
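That last-write-wins behavior can be sketched in a few lines (JavaScript here for brevity; the actual implementation described below uses LotusScript and a notes.ini variable):

```javascript
// A single-slot command queue: any number of writers overwrite the slot,
// the one consumer takes the current command and resets it to "do nothing".
var IDLE = 'NOOP';

function SingleSlotQueue() { this.slot = IDLE; }

SingleSlotQueue.prototype.post = function (cmd) {
  this.slot = cmd; // last instruction in wins; any pending one is discarded
};

SingleSlotQueue.prototype.take = function () {
  var cmd = this.slot;
  this.slot = IDLE; // clear after pickup so it cannot be processed twice
  return cmd;
};
```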

In Use...

I have an application with a series of framesets. In one frame is an embedded IE Browser object occupying the entire frame. This means controls to interact with it cannot be on the same form; they in fact live in a separate form, in a separate frame, positioned much like a browser control bar above the embedded browser's frame. If I want the user to be able to "click" back or forward or reload, I have to have a way to tell the OLE object to execute these internal commands. However, scripting frame to frame via LS is not supported in a straightforward way.

To achieve this frame to frame interaction, I took my single instruction queue and manifested it as a variable in the notes.ini file. This is a safe, quick, local and global access point for a short string. In my browser form, I have a timer running once a second to examine the "queue" for a new message. If the variable has a command I deemed to mean "do nothing", the timer's processing function exits. If it finds a command though, placed there by clicking on the control bar, it interprets the instruction and calls the desired function on the locally scoped OLE Browser.

The browser executes this in a serial fashion, then returns control to the calling function, which as a final step, returns the queue to the "do nothing" state.

Pros and Cons

I like this approach from the perspective that it allows the user to click "back" and then quickly decide they meant "reload", and the browser will reload instead of go back because it only takes the last command. It also gets around less elegant approaches like using "set focus" and "send key" commands, or even Windows messaging queue manipulation (which I considered and consequently drew inspiration from).

I don't like this approach because the LS timer can't get any quicker than 1 second - which means the user has a maximum command lag of 1 second, depending on when they click the button and when the timer next calls its function. But, it has made for a much more reliable connection than other methods explored.

Other Ideas

Combining JavaScript and LotusScript was a heavily considered idea. But, for all that you can do in the client with JavaScript, OLE automation doesn't appear to be one of the available options - at least not that I could tell. I would love to find out this were not true as a JavaScript timer would be 1000 times more sensitive, potentially. Also, JavaScript in the client is able to jump frames.

Dispatching messages directly to the Windows messaging queue was another attractive option, as you get thread priority response time (whatever priority the browser is given), but there is a lot of work to do to get the proper window handle to dispatch a message to. I looked at this very closely and gave up trying to tunnel down through the Client hierarchy to get an ambiguously referenced embedded object's window handle. Just too iffy. Again though, I'd love to see where someone had nailed this process down, as it would mean much more control over the object (just not Refresh, as that's not among the standard Windows messages and is more of a context / menu verb that needs to be invoked).

And, happy Labor Day.

XML: Web Based Tools

Originally published 

Thu 2 Jun 2011

Editor's Notes
XML is a prevalent artifact on the internet, though you may not always see it.  Here are some of the tools I find useful when I must work with it.

Original Post


I've made use of these frequently when working with ad-hoc web services.

The XSL Tryit Editor is great for seeing the HTML result of your XSLT using your XML, or the XML and XSLT provided. XSLT Tryit Editor v 1.0

Of course, everyone needs to have a handy bookmark for the XML Validator for simply and quickly checking to see that you have closed your nodes. XML Validator

Depending on what you're doing, you may come to need an XSD / Schema generator. It takes your XML and makes an XML based description of it that can be consumed by web services to validate that the inbound XML matches the spec. XSD Generator

Update 11/16/2011
Another tool, not XML related but equally useful for web work: the Base64 Decoder helps when you're working with Base64 values.

Excel: Export as HTML with Formatting

Originally published 

Mon 16 Jan 2012

Editor's Notes
Here's a small tip for making those automatically generated Excel spreadsheets a tad prettier.

Original Post
Exporting to Excel from ASP or LotusScript is a nice old trick which has a lot of utility for web based applications. One challenge I had recently was that the data should come out formatted as dollars, but was stored as longs. I found the following info regarding how to inject formatting style commands into the style tag of the table cells.

 <td STYLE="vnd.ms-excel.numberformat:$* #,##0">

That produces the desired dollar formatting. Some other options follow; [semicolon] marks where a literal semicolon belongs in the format (spelled out because a bare semicolon would end the CSS declaration inside the style attribute):


Thousands, number, with 2 decimals:
vnd.ms-excel.numberformat:#,##0.00_)[semicolon](#,##0.00);

Dollar, showing thousands, two decimals, black:
vnd.ms-excel.numberformat:$* #,##0.00_)[semicolon][Black]$* (#,##0.00);

As Text:
vnd.ms-excel.numberformat:@;

Number:
vnd.ms-excel.numberformat:0

Decimal:
vnd.ms-excel.numberformat:0.00

European two digit year:
vnd.ms-excel.numberformat:dd/mm/yy

European 4 digit year:
vnd.ms-excel.numberformat:dd/mm/yyyy

U.S. Date format
vnd.ms-excel.numberformat:mm/dd/yyyy
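Generating such cells from script is just string assembly. A hypothetical helper (the function name is mine, not from the original post; the format string is emitted verbatim, so any semicolon escaping is the caller's responsibility):

```javascript
// Wrap a value in a <td> carrying an Excel number format in its style
// attribute, for HTML tables that will be opened in Excel.
function excelCell(value, numberFormat) {
  return '<td style="vnd.ms-excel.numberformat:' + numberFormat + '">' +
         value + '</td>';
}
```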

AJAX: Progress Bar Management

Originally published 

Tue 28 Aug 2012

Editor's Notes
The key to effective web design is communication between two sets of parties.  The most important parties are the people your website is trying to reach and you, the website owner.  There is a separate relationship here though - your website and its readers.  While the website is the tool you use to communicate, the tool is only effective if it communicates well with the readers.  To that end, you'll find a number of tips like the following here that add some minor decoration or indicator to help the reader or application user understand what the application is trying to tell them.

Original Post


Using some form of "working" or "busy" indicator with ajax calls is nice for the user experience. It lets folks know something is happening and can prevent multiple button clicks that otherwise might not be desirable. There are a couple of ways I've handled this need and I like them both for different reasons and situations.

Looping Animated
You can find a number of looped .gif or .swf busy indicators on the web. There's even a website that will generate them for you. I particularly like Ajaxload.info because they provide full control over the appearance and color of the animation and a number of styles to choose from. And it's free!
I like this sort of indicator for predictably quick response times and will insert it at the beginning of an ajax call into the area being populated with the ajax data results, then replace it with the data when done. It gives the user a good visual "hint" as to where to look for the action to take place, so they aren't hunting around the screen for whatever it was that updated when the ajax call completed.
There are times though when a looping animation doesn't convey the right information.

JQuery UI Progress Bar
JQuery UI has a nice progress bar widget that you can theme to an extent by choosing different JQuery UI theme packages. It is pretty basic but can be combined with text to make a very informative feedback point. You can show it, set its position, and hide it. All you need, really.
The way I found most useful to use it helps get around the way ajax calls work. We pretty much fire off the request and wait for a response. We don't know if the server is making any progress. Most of the time it is, but we're left to guess and wait. To help bridge this gap in user feedback, I've started using a setInterval call in JavaScript to increment the progress bar 1 tick every 300 ms, resetting it to 0 when it reaches 100. This way, the progress bar shows when the request is initiated, "runs" while the request is processing at the server, and is stopped when the ajax success method is called.
For the user, this feedback is golden and very communicative. They are not left to wonder if things are hung up or working at all. More elaborate processes that involve many ajax calls can instead set relative positions on the progress bar as results are returned (in a non-synchronous series) when appropriate, and I've used this as well. Both approaches help tell the most important story: "things are happening in response to your mouse click".

Here's some sample Javascript.
// some functions for shortcutting calls to the progress bar - all require jquery

var timer; // interval handle for the simulated progress
var p = 0; // current progress value, 0-100

function pset(val){
 $('#progressbar').show();
 $('#progressbar').progressbar({value:val});
}

function pshow(){
 $('#progressbar').progressbar();
 $('#progressbar').show();
}

function phide(){
 $('#progressbar').fadeOut();
}

// some higher level workers for use with above

// the running progress bar with setInterval
function doPBStart(){
 pshow();
 pset(p);
 try{
  clearInterval(timer); // clear any prior run so intervals don't stack
 } catch (e) {}
 timer = setInterval(function(){setP()},300);
}

// advance one tick, wrapping back to 0 past 100
function setP(){
 p++;
 if (p > 100) { p = 0; }
 pset(p);
}

function doPBEnd(){
 pshow();
 pset(100);
 phide();
 clearInterval(timer);
 p=0;
}


JavaScript: Array Object Extensions

Originally published 

Wed 13 Feb 2013

Editor's Notes
There are some fine frameworks these days for JavaScript, but sometimes you want something to work a specific way, and that's fine.  That would be an adequate description of this particular gem.

Original Post
I've culled a few, and written a few, JavaScript Array object extensions to make working with arrays a bit more complete.  There are plenty of base functions worth understanding, but the following have so far required prototyping out some extended functions.



// extends js base array object using jquery to add a diff function
// arrayA.diff(arrayB) - returns elements of arrayA not present in arrayB
Array.prototype.diff = function(a) {
 return $.grep(this, function (item) {
  return jQuery.inArray(item, a) < 0;
 });
};

// extends js base array object to add "unique" function
// keeps the first occurrence of each element; also drops empty strings
Array.prototype.unique = function() {
 var a = [], i, l = this.length;
 for (i = 0; i < l; i++) {
  if (a.indexOf(this[i]) < 0 && this[i] !== '') {
   a.push(this[i]);
  }
 }
 return a;
};

// extends js base array object to add "trim" function - removes empty elements
Array.prototype.trim = function() {
 var b = [], i, l = this.length;
 for (i = 0; i < l; i++) {
  if (this[i] !== '') {
   b.push(this[i]);
  }
 }
 return b;
};

// removes all elements equal to f
Array.prototype.drop = function(f) {
 var b = [], i, l = this.length;
 for (i = 0; i < l; i++) {
  if (this[i] !== f) {
   b.push(this[i]);
  }
 }
 return b;
};

// removes every occurrence of substring f from each element
Array.prototype.StringFilter = function(f) {
 var b = [], i, l = this.length;
 for (i = 0; i < l; i++) {
  if (this[i].indexOf(f) >= 0) {
   b.push(this[i].split(f).join(''));
  } else {
   b.push(this[i]);
  }
 }
 return b;
};
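For comparison, the trim and unique behaviors can also be had without touching Array.prototype, using the built-in filter (a sketch assuming an ES5-capable engine; the function names here are mine):

```javascript
// Remove empty elements - equivalent to the trim extension above.
function trimArray(arr) {
  return arr.filter(function (x) { return x !== ''; });
}

// Keep the first occurrence of each element - equivalent to unique above,
// minus its empty-string filtering.
function uniqueArray(arr) {
  return arr.filter(function (x, i) { return arr.indexOf(x) === i; });
}
```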

Scraped out of the corners of the internet...

This is the customary first blog post of a new blog.  In this case, the blog is the West Liberty Tech Blog, a nexus of old articles published once upon a time on Datatribe Softwerks, Ltd, reprinted here with permission from the author (me).  This is as much a place to file tips and tricks and things I have learned as it is a place to connect with local tech talent in West Liberty, Ohio.

If you live in Logan or Champaign Counties, in Ohio of course, and need a local short term technical assist with software, old or new, or hardware, shiny, dusty or archaic, I (or we) might be able to help.  The "we" comes into play when I apply my ability to diagnose a problem and find a person to help resolve it.

With any luck, there will be much to read here in the near future.  My initial plan is to dump a selection of old articles, tips and tricks that remain relevant here and then resume my technical blogging that I gave up roughly 2 years ago*.  I just need a place to keep useful information, and I've never regretted sharing what I learn.

Stay tuned!

*I ran two blogs, Datatribe Softwerks, Ltd, and The Lanced Boil for about 9 years and wrote over 900 posts.  Few stand up to the test of time because they were fluff pieces related to what technology I was playing with that was cutting edge at the time (most of it is no longer so) or had to do with topics just not even remotely related to technology.  The resulting "curated" entries I have added here are a slim and slender slice of the volumes I wrote during that time, but they provide some lingering utility and so were worth carrying forward.