Tuesday, March 27, 2018

Upgrading Performance with a Solid State Drive

You want the best performance from your computer, but you may not want to spend a lot of money to get it.

One simple way to increase performance is to add an inexpensive SSD to host your most heavily used programs and files.  Normally, just copying files between hard drives on Windows produces broken shortcuts and invalid registry entries.  You could go track all of those down and edit them by hand, or you could use junctions (or symbolic links) to tell Windows that the files have moved.  Then, any time a file is requested from the old location, Windows transparently serves it from the new one, as though nothing had changed.

Junctions and symbolic links, created with the mklink command that has shipped with Windows since Vista, make this task much easier than uninstalling and reinstalling your programs or editing registry entries by hand.  Here's an example from my computer.  I had noticed that loading my desktop and many documents I use regularly had become slow on the spinning hard disk, so my User Profile directory was the first candidate for migration.

First, I used Windows Explorer to simply drag and drop the folder from one drive to the other.  Then, launching the Command Prompt as Administrator (find it by hitting the Windows key and typing "cmd", then right-clicking the shortcut that appears and selecting "Run as Administrator"), I issued the following command:


C:\Users\JC>mklink /J "C:\Users\JC\Documents" "G:\Users\JC\Documents"
Junction created for C:\Users\JC\Documents <<===>> G:\Users\JC\Documents
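
To confirm the junction took, you can list just the reparse points in the parent folder; the /aL switch limits dir to junctions and symbolic links, so the relocated Documents folder should appear as a <JUNCTION> entry:

C:\Users\JC>dir /aL C:\Users\JC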

This produced immediate benefits. For starters, when I start Windows and log in, my Desktop appears almost immediately.  Opening any files I may have been working on, such as large CAD or SketchUp files, has extremely low lag; what used to take many seconds is now almost instantaneous.

This is a powerful technique for getting more useful life out of your ageing systems.  While we wait for Intel and AMD to re-engineer their chipsets to exclude the vulnerabilities published early this year, the $40-$50 spent on a solid state drive is a tenth or a twentieth of what you would pay for a full system upgrade at this time.


Bench Notes:

Something I noticed, however, when I began copying a large folder containing around 79 GB of data, was that about 30% of the way in, the transfer rate topped out around 35.1 MB/second and then began slowly falling. I deduced that the chips responsible for I/O were getting hot, increasing resistance and slowing data transfer.  So I fired up SpeedFan, a tool for tuning your variable-speed onboard fans, and it immediately increased the RPM of one internal fan.  Over the next several seconds, the transfer rate rose from 34.5 to 38.4 MB/second before slowly declining again.  While I don't know for sure that heat buildup was slowing the data transfer rate, SpeedFan did report that the physical hard disk was a desiccating 124°F.

Also, in my case, I sacrificed having a connected DVD drive to add the SSD, due to a lack of SATA cables. If you order an SSD, make sure you order a connection cable set as well.  You'll need one cable for power and one for data, or a combo connector.  Take a peek at your motherboard to determine what you need, or have a trusted service technician do this for you.

A word on backups: always have a backup solution in place for your important data.  While SSDs are now a mature technology, when one fails the data is almost always lost unless you happen to have a skillful electrical engineer with experience repairing them handy.  I recommend the freeware app "Create Synchronicity" for scheduling backups, and suggest you have a home NAS (Network Attached Storage) somewhere on premises to serve as an archive.

Thursday, January 25, 2018

Understanding @Autowired

The life of a full stack developer is an adventure full of new things to learn daily.  This week, after about a year of working from templates based on Spring Boot, I finally came to understand what is going on with @Autowired.

Spring Boot, if you use it for Java development, provides a very clean and easy-to-use dependency injection model centered on the @Autowired annotation.  It's "easy to use", that is, once you wrap your head around top-down dependency injection and what it means for your instantiated classes.  The trick is starting with the SpringWebConfig class, where your classes are no longer created with the "new" operator but are exposed through methods annotated with @Bean.

In SpringWebConfig.class:
@Bean
public MyClass getMyClass() {
    return new MyClass(); // the one place you call "new"
}

What @Bean buys you is registration in the Spring application context, which makes the instance available to all other classes annotated with @Component.  The downside is that if you want to use any @Autowired dependency in a class, such as Environment, you have to register that class as a bean and then autowire an instance of it wherever you would have instantiated the class, as in a test class or a main execution class like a Controller or Main.  This means classes are eagerly loaded, creating slightly longer startup times for applications, but there is less code to write and manage.
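
For example, here is a minimal sketch of a component that autowires Spring's Environment; the class name MyService and the property key "db.url" are placeholders of mine, not anything Spring requires:

@Component
public class MyService {

    @Autowired
    private Environment env; // injected by Spring from the context; never created with "new"

    public String databaseUrl() {
        return env.getProperty("db.url"); // look up a configuration property (placeholder key)
    }
}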

The trick here is to remember to avoid things like...

MyClass myClassInstance = new MyClass(); // don't do this: Spring never sees this instance, so any @Autowired fields inside it stay null and will throw NullPointerExceptions when used.

...and prefer instead something that seems a bit more complex at first glance, but ties lots of things together as if by magic. In the places you would use your class, such as in a test class, access the context scoped class instance this way:
@RunWith(SpringJUnit4ClassRunner.class) // JUnit 4 needs this runner for @Autowired to be processed in the test
@WebAppConfiguration                    // required when using a web context loader such as AnnotationConfigWebContextLoader
@ContextConfiguration(loader = AnnotationConfigWebContextLoader.class, classes = {SpringWebConfig.class})
public class TestClass {

    @Autowired
    private MyClass myClassInstance; // at run time, this is instantiated via SpringWebConfig and scoped to TestClass

    @Test
    public void testTheClass() {
        assertNotNull(myClassInstance); // success!
    }
}
This does change the way you'll work with constructors.  Passing values to constructors doesn't work well in this model, because you are no longer the one calling "new".  Prefer instead to write methods on your classes that accept configuration parameters, if needed.
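
For instance, rather than writing new MyClass(someSetting), you might give the bean a configuration method and call it from the @Bean factory method. A rough sketch; the configure method and the value passed to it are made up for illustration:

In SpringWebConfig.class:

@Bean
public MyClass getMyClass() {
    MyClass instance = new MyClass();
    instance.configure("https://example.com/api"); // hypothetical setting passed through a method rather than a constructor
    return instance;
}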

You can read much more about Spring annotations online, or check out the official reference guide.

Monday, January 15, 2018

SQL-fu: Modify a Data Bearing Table

I was faced with a challenge: add a column to a table.  It sounds simple enough, but the table already had data in it and an Identity column, so care had to be taken to preserve the data as well as the identity values, since they were used as part of a key on another table.  After discussing possibilities with my DBA, we came up with the following approach.

1. Copy the main table to a backup table on the same database.
2. Drop the original table.
3. Create the modified table structure, including the Identity declaration.
4. Turn on Identity Insert so that columns usually protected and written only by the server can be written from the backup data, thus preserving the identities.
5. Insert the backup data, sans new column, into the new table.
6. Update the new table with default values for the new column (optional).
7. Turn off Identity Insert.
8. Drop the backup table.

You can run these steps one at a time to confirm each works, or run them all at once provided you break the queries into separate batches with the GO statement; otherwise you'll be attempting to write to columns that don't exist at compile time.

Here’s a sample SQL script that accomplishes the above task.

SELECT * INTO BACKUP_MYTABLE FROM MYTABLE
GO
DROP TABLE MYTABLE
GO
CREATE TABLE MYTABLE
(
    oldIdentityCol int NOT NULL IDENTITY(1,1),
    oldColumn1 varchar(10),  -- as appropriate to your original data structure
    newColumn int
)
GO
SET IDENTITY_INSERT MYTABLE ON
GO
INSERT INTO MYTABLE (oldIdentityCol, oldColumn1) SELECT oldIdentityCol, oldColumn1 FROM BACKUP_MYTABLE
GO
SET IDENTITY_INSERT MYTABLE OFF
GO
UPDATE MYTABLE SET newColumn = 1 WHERE newColumn IS NULL  -- optionally populate your new column
GO
DROP TABLE BACKUP_MYTABLE
GO

Sunday, November 19, 2017

Decimate Data with JavaScript's Filter Function

In 2011, the JavaScript standard added several new array functions that greatly simplify working with large datasets.  One of the challenges with "big data" is charting it: rendering every individual data point can seriously hamper the performance of your chart, increasing render times, locking up the browser for a while, or crashing it outright.

As I skimmed through several internet postings on elaborate methods for decimating data (that is, reducing the resolution of a dataset through the application of some function), I realized that the built-in filter function could do the work easily.

Consider an array of values:

var a = "4.901712886872069,4.905571030847647,4.909414346738851,4.913242948087607,4.91705694713669,4.92085645484947,4.92464158092928,4.928412433838424,4.932169120816823,4.93591174790032,4.939640419938632,4.94335524061298,4.947056312453386,4.950743736855651,4.954417614098028,4.958078043357581,4.961725122726249,4.965358949226622,4.968979618827425,4.972587226458727,4.976181866026878,4.97976363042917,4.983332611568248,4.986888900366257,4.990432586778738,4.9939637598082856,4.997550054547602,5.001123533723662,5.004684288602818,5.008232409479947,5.011767985692191,5.015291105632452,5.018801856762651,5.022300325626761,5.025786597863601,5.029260758219422,5.032722890560263,5.036173077884096,5.039611402332775,5.043037945203758,5.046452786961651,5.049856007249539,5.053247684900134,5.05662789794673,5.059996723633982,5.063354238428486,5.066700518029206,5.070035637377705,5.07335967066822,5.076672691357564,5.079974772174871,5.083265985131165,5.086546401528796,5.0898160919706985,5.0930751263695155,5.096323573956563,5.099561503290659,5.102788982266802,5.106006078124715,5.109212857457251,5.112469599977525,5.115715770546765,5.1189514375797165,5.122176668829165,5.125391531394446,5.128596091729822,5.131790415652724,5.134974568351865,5.138148614395217,5.141312617737873,5.144466641729777,5.147610749123334,5.150745002080902,5.153869462182165,5.156984190431394,5.160089247264595,5.163184692556541,5.166270585627705,5.169346985251077,5.172413949658882,5.175471536549192,5.178519803092442,5.181558805937842,5.184588601219694,5.187609244563616,5.190620791092663,5.193623295433372,5.1966168117216975,5.199601393608879,5.202577094267201,5.205543966395682,5.208556759340414,5.211560502621755,5.214555250442706,5.21754105652075,5.220517974093625,5.223486055925034,5.2264453543102425,5.229395921081616,5.232337807614062,5.235271064830402,5.238195743206656,5.241111892777258,5.2440195631401885,5.246918803462037,5.249809662482995,5.252692188521762,5.255566429480405,5.258432432849123,5.261290245710962,5.264139914746451,5.266981486238185,5.269815006075326,5.272640519758055,5.275458072401956,5.278267708742335,5.281069473138485,5.283863409577888,5.286649561680356,5.289427972702121,5.292198685539861,5.295011909539312,5.2978172415063565,5.300614725596722,5.303404405596596,5.306186324926734,5.308960526646518,5.311727053457957,5.314485947709622".split(',');

Now let's say I want to decimate that so a small status chart only ever shows around 10 data points.  A few more or fewer are fine, as long as the chart doesn't keep growing as I add more batches to this initial set.  The following filter call uses the modulo operator as a simple reduction function.

a.filter(function(elem,index,array){return index % (Math.round(array.length/10)) ==0;});

...yields a set of 10 data points: one for every array index that leaves a remainder of zero when divided by the dynamic value array.length/10.  Math.round simply turns that dynamic value into a whole number, which works well with the modulo operator (%) against the integer indexes of the array.

"4.901712886872069,4.950743736855651,4.997550054547602,5.043037945203758,5.086546401528796,5.128596091729822,5.169346985251077,5.208556759340414,5.246918803462037,5.283863409577888"

Now any charting we may do with this data will continue to tell the story, sacrificing resolution in exchange for performance.
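
If you find yourself decimating more than one dataset, the same one-liner generalizes into a small helper. A sketch; decimate and targetPoints are just illustrative names:

function decimate(values, targetPoints) {
    // keep roughly targetPoints elements; the guard avoids a step of zero on short arrays
    var step = Math.max(1, Math.round(values.length / targetPoints));
    return values.filter(function (elem, index) {
        return index % step === 0;
    });
}

var reduced = decimate(a, 10); // same result as the filter call above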

Thursday, November 3, 2016

New Tools, New Skills

Every so often in my career, I find that I have been using a set of tools for a long enough period of time that they have been surpassed by the market.  For me, this has been Eclipse for Java development, and SVN for code check-in and check-out.  All the cool kids are using IntelliJ and git these days, so with some time off between contracts, I have spent the week getting familiar and functional with these tools.  I wish I had done so sooner.

Even at the Neon release, Eclipse still has some way to go to catch IntelliJ.  While the environments have very dissimilar setups, some conventions are alike enough that a few tutorials are all it takes to get up to speed.  IntelliJ does more for the developer and in doing so saves a lot of time.  Eclipse continues the tradition of an open-ended environment made better through many, many plugins.  It doesn't get too opinionated, but that sometimes leaves the way forward on a particular path a little less clear.  IntelliJ, by contrast, takes a slightly more opinionated view of how things should be done, so while there is much to learn when moving from one to the other, there isn't as much to learn to become productive.

SVN, a long-time bacon-saver for many developers, myself included, is a centralized repository system.  While it is simple to understand and use, forking and merging code is often not as straightforward as it could be.  You can check out from a certain branch, but merging has always felt like surgery to me.  Git, and specifically the very nice website built around it, GitHub, makes this clearer and more automated.  While the git console, especially when paired with certificate-based authentication, can be cryptic and frustrating, using GitHub with the integrated support in IntelliJ IDEA makes it as painless to use as SVN, and much less painful when it comes to merging branches.

Another tool I have been longing to embrace for some years now is JUnit.  I didn't even know what assertion-based testing was a few years ago.  I've always written my own test code but never thought much beyond automating calls to my API to make sure things did what I expected.  There is a whole bevy of testing techniques that go well beyond this, and I've picked them up one at a time as client-driven work has allowed.  While I've been trying to get a recent client to let me re-baseline a few projects taken in from the off-shore labs and rebuild them as Test Driven Development projects from the ground up, I have now had the time to do this on my own and found the experience gratifying.  Thinking about your code from a testing perspective puts new mental focus on lean code that follows the DRY and SOLID principles.

This meshes very well with my shift to daily workouts this year.  Self-discipline, I find, suits me, and using these technologies has made that both easier and more meaningful in terms of improving the consistency and reliability of my code.  I'm looking forward to future projects and contracts, and expect the future to bring more improvements to the tools we love and use.

Sunday, December 20, 2015

Retooling with Abstraction

Below you will find a presentation I prepared over a year ago for a customer who was contemplating replacing their entire software backbone, moving from one legacy full stack to another.  The reasons for contemplating this are ultimately not material to the discussion, except that they provided the catalyst for my own thought process.  I could see momentum building behind the "do something" mantra and sought to help them avoid rushing off the cliff in a way that would result in huge disruptions.

My slide deck was intended as an introduction to both the MVC design pattern and software abstraction as a concept, presented at a time when they would most benefit from adopting the sort of approach it represents.  Ultimately they took a path similar to the one I outlined, but with a critical difference: they moved to more SaaS systems rather than building their own.

If you have questions, please feel free to ask me.  I've helped large companies do this sort of transition on many platforms - the tools themselves are not as important as the way they are employed.

The Case for Retooling and Abstraction

Feel free to share this if it makes the conversation with your management or stakeholders easier.  I only ask that you share it from the source link so that I have some idea as to how widely it is used.

Monday, December 14, 2015

Free File Recovery Tool: PhotoRec


CG Security has a free tool for photo and file recovery called PhotoRec.

http://www.cgsecurity.org/wiki/PhotoRec

You can get it for most file systems as a stand-alone tool which will run in a command-line style interface.

I used it this weekend to recover a friend's files that went missing after his upgrade to Windows 10.  So, a note about that: if you have any data outside your home path, usually C:\Users\yourname, you will lose it when you upgrade to Windows 10 unless you take steps to back it up.

In my friend's case, the files were still on his computer, by the grace of God, and not actually overwritten with new data by the Windows installation.

I will make a guess that if you're reading this post, you've lost some files (or more likely have a friend who has), so let's review a couple of things everyone should do before the walk-through.

1. Always back up your data.  Put it on another computer, a server, or a cloud service such as OneDrive, Dropbox, or Google Drive, or set up a home backup server.

2. Use an automated tool to make your file backups, wait for it..., automatic (a sketch using tools built into Windows follows this list).  There's nothing worse than having spent money or time on a backup solution that provides no benefit because you forgot to use it.

3. On Windows, as with Linux, User files should always be kept in the user's home directory.
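
On point 2, if you'd rather not install anything, Windows' own robocopy and Task Scheduler can do the job.  A minimal sketch; the task name, paths, and schedule are placeholders, and note that robocopy's /MIR switch also deletes destination files that no longer exist in the source:

C:\>schtasks /create /tn "NightlyDocsBackup" /sc daily /st 02:00 /tr "robocopy C:\Users\YourName\Documents \\NAS\backup\Documents /MIR /R:2 /W:5"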

So, a quick how-to for PhotoRec, in this case on a Windows laptop that offered two USB ports.

1. Download the tool to a USB drive from which you will run it.

2. Scrounge up a couple of extra USB drives for the recovered data.

3. Boot the system and plug in the USB drives.

4. Run PhotoRec, follow the default selections for the most part, and then navigate to your second USB drive and use it as the target for recovering your data.

It's pretty much that simple.  I do suggest investigating the file filters before you run the tool, though.  It will find everything that hasn't been totally obliterated, so narrowing the search filter will save you time and keep huge numbers of false positives from winding up in the recovery folders the tool creates.  You will still need to review the recovered data and cherry-pick the things you wanted.

We were looking for office documents in our example and found a surprising number of things that were not exactly office documents in our recovered-files folder.  You can easily spot the valid files by looking for complete file names and turning on the Authors column in Windows Explorer.  You can also use Windows Search on the recovery-target USB drive with advanced options to search for text within the recovered files.

While this process is pretty easy for a technically capable person, it does require some experience to pull off without making matters worse. If you need a hand, leave a comment - I'm happy to help for a reasonable fee.