Wednesday, December 5, 2018

Generating Complementary Color Pairs with JavaScript

When using a library such as Chart.js, you will sometimes want to assign colors to elements, such as bars or pie wedges, in a functional way so that n items have n unique colors.  Chart.js in particular allows you to provide a fill and a border color, each as an array equal in length to your dataset.

For the purpose of creating pleasant colors with "half tone" complements, I recently rolled a few helper functions based on an example for generating random rgba color strings.  I hope they are useful to you.

 // biased towards the upper end of the brightness spectrum:
 // each channel lands between 155 and 255
 function random_rgba() {
        var o = Math.round, r = Math.random, s = 100;
        var rgba = [];
        rgba.push(o(r() * s) + 155);
        rgba.push(o(r() * s) + 155);
        rgba.push(o(r() * s) + 155);
        rgba.push(1); // full opacity
        return rgba;
 }

 // darken a color by scaling each channel toward zero
 function dimRGBA(rgba) {
        var tone = 0.7;
        var dimtone = [];
        dimtone.push(Math.round(rgba[0] * tone)); // round, since rgba() channels should be integers
        dimtone.push(Math.round(rgba[1] * tone));
        dimtone.push(Math.round(rgba[2] * tone));
        dimtone.push(1);
        return dimtone;
 }

 // render an [r, g, b, a] array as a CSS color string
 function stringifyRgba(rgba) {
        return 'rgba(' + rgba[0] + ',' + rgba[1] + ',' + rgba[2] + ',' + rgba[3] + ')';
 }

 // returns [bright color, dimmed complement] as CSS strings
 function rgbaComplementaryPair() {
        var pair = [];
        var sourcecolor = random_rgba();
        var complement = dimRGBA(sourcecolor);
        pair.push(stringifyRgba(sourcecolor));
        pair.push(stringifyRgba(complement));
        return pair;
 }
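
As a usage sketch, here is how the pairs might feed a Chart.js dataset (the backgroundColor and borderColor array properties follow Chart.js dataset conventions; colorArraysFor is a hypothetical helper of my own naming):

 // build one unique fill/border pair per data item
 function colorArraysFor(n) {
        var fills = [], borders = [];
        for (var i = 0; i < n; i++) {
               var pair = rgbaComplementaryPair();
               fills.push(pair[0]);   // bright fill
               borders.push(pair[1]); // dimmed border
        }
        return { backgroundColor: fills, borderColor: borders };
 }

 var colors = colorArraysFor(5);
 // assign colors.backgroundColor and colors.borderColor
 // onto the dataset object before creating the chart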

Thursday, September 27, 2018

Tomcat Manager Dark Theme

If you use Tomcat as a web server, you know the manager console isn't much to look at.  It's adequate, but not pleasant.

As a developer, I enjoy a lot of the dark themes available for Eclipse and other editors. So, to save my eyes when switching to the Tomcat manager console, I created a set of stylesheet overrides you can use in Chrome with the Stylish plugin. It also adds a little user friendliness by highlighting the table row for the application you are about to interact with when you click Stop, Undeploy, or Reload.  There are also some UI hints for hovering over controls.

Please enjoy, and feel free to comment with suggested improvements.

input[type="submit"] {background-color: #919090!important; color: #823d01!important}

input[type="submit"]:hover {background-color: #737373!important; color: #ffb829!important; border-color: #555555!important; cursor: pointer}

body, table {background-color: #332a1f!important; border: 0px solid!important}

table {border-collapse: collapse!important; border-color: #866635!important}

button, input {background-color: #333333!important; border: 1px solid white!important; border-radius: 5px; color: #ffb300}

input:hover {background-color: #aaaaaa!important; color: #aa6900!important}

tr {background-color: #777777!important}

tr:hover {background-color: #919191!important; border-color: #aaaaaa!important; color: #aa6900!important}

tr:hover a {color: #333333}

td {background-color: rgba(0,0,0,0)!important; color: inherit!important}

a {background-color: rgba(0,0,0,0)!important; color: #ffb300}

* {color: white}

img {height: 32px; border-radius: 20px; border: #a8a8a8 2px solid}

Tuesday, March 27, 2018

Upgrading Performance with a Solid State Drive

You want the best performance from your computer, but you may not want to spend a lot of money to get it.

One simple way to increase performance is to add an inexpensive SSD to host your most heavily used programs and files.  Normally, just copying files around between hard drives on Windows produces broken shortcut links and invalid registry entries.  You could go track all of those down and edit them by hand, or you could use symbolic links (or, as below, a directory junction) to tell Windows that the files have moved.  Then, any time a file is requested from the old location, Windows transparently resolves it to the new location, as though nothing had moved.

NTFS links (junctions and hard links), exposed through Windows' mklink command, make this task much easier than uninstalling and reinstalling your programs or editing registry entries by hand.  Here's an example from my computer.  I had noticed that loading my desktop and many documents I use regularly had become slow on the spinning hard disk, so my User Profile directory became the first candidate for migration.

First, I used Windows Explorer to simply drag and drop the folder from one drive to the other.  Then, launching the Command Prompt as Administrator (find it by hitting the Windows key, typing "CMD", then right-clicking the shortcut that appears and selecting "Run as Administrator"), I issued the following command:


C:\Users\JC>mklink /J "C:\Users\JC\Documents" "G:\Users\JC\Documents"
Junction created for C:\Users\JC\Documents <<===>> G:\Users\JC\Documents
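
To confirm the junction is in place, you can list just the links in the parent folder with dir's /AL switch (output trimmed here; exact formatting varies by Windows version):

C:\Users\JC>dir /AL
<JUNCTION>     Documents [G:\Users\JC\Documents]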

This produced immediate benefits. For starters, when I start Windows and log in, my Desktop appears almost immediately.  Opening any files I may have been working on, such as large CAD or SketchUp files, has extremely low lag. What used to take many seconds is now almost instantaneous.

This is a powerful feature for getting more useful life out of your aging systems.  While we wait for Intel and AMD to re-engineer their chipsets to exclude the vulnerabilities published early this year, the $40 to $50 spent on a solid state drive is a tenth or a twentieth of what you would pay for a full system upgrade at this time.


Bench Notes:

Something I noticed, however, when I began copying a large folder containing around 79 GB of data: about 30% of the way in, the data transfer rate topped out around 35.1 MB/second and then began slowly falling. I suspected that the chips responsible for I/O were getting hot, increasing resistance and slowing the transfer.  So I fired up SpeedFan, a tool for tuning your variable-speed onboard fans, and it immediately increased the RPM of one internal fan.  Over the next several seconds, the data transfer rate rose from 34.5 to 38.4 MB/second before slowly declining again.  While I don't know for sure that heat buildup was slowing the transfer rate, SpeedFan did report that the physical hard disk was a scorching 124°F.

Also, in my case, I sacrificed having a connected DVD drive for the addition of the SSD due to a lack of SATA cables. If you order an SSD, make sure you order a connection cable set as well.  You'll need one cable for power and one for data, or a combo connector.  Take a peek at your motherboard to determine what you need, or have a trusted service technician do this for you.

A word on backups: always have a backup solution in place for your important data.  While SSDs are now a mature technology, when one fails the data is almost always lost unless you happen to have a skillful electrical engineer with experience repairing them handy.  I recommend the freeware app "Create Synchronicity" for scheduling backups, and suggest you have a home NAS (Network Attached Storage) somewhere on premises to serve as an archive.

Thursday, January 25, 2018

Understanding @Autowired

The life of a full-stack developer is an adventure full of new things to learn daily.  This week, after about a year of working from Spring Boot-based templates, I finally came to understand what is going on with @Autowired.

Spring Boot, if you use it for Java development, provides a very clean and easy-to-use dependency injection model built around the @Autowired annotation.  It's "easy to use", that is, once you wrap your head around top-down dependency injection and what it means for instantiated classes.  The trick is starting with the SpringWebConfig class, where your instances are no longer created with the "new" operator but returned from methods annotated with @Bean.

In SpringWebConfig.class:
@Configuration
public class SpringWebConfig {

       @Bean
       public MyClass getMyClass(){
              return new MyClass(); // the one place you call new
       }
}

What @Bean buys you is registration in the Spring context, making the instance available to all other classes annotated with @Component.  The catch is that if you want to use any @Autowired dependency in a class, such as Environment, you have to register that class as a bean and then autowire an instance of the bean wherever you would have instantiated the class, such as a test class or a main execution class like a Controller.  This means classes are eagerly loaded, creating slightly longer start-up times for applications, but there is less code to write and manage.
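
In any component-scanned class, the bean is then wired in rather than constructed (ReportService here is a hypothetical consumer, purely for illustration):

@Component
public class ReportService {

       @Autowired
       private MyClass myClassInstance; // supplied by the SpringWebConfig bean

       public void report(){
              // use myClassInstance here; never call new MyClass()
       }
}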

Update: I neglected to point out that a benefit of using beans this way is that they are all, by default, singletons. Each exists in memory exactly once, which enforces tidy memory management and can improve your application's performance at run time.  A small recent anecdote: migrating from ad hoc JDBC connections in a handful of classes to a single Spring JdbcTemplate bean reduced our open database connections from 117 to fewer than 20.  This means less memory consumed on your application server, and it also frees up database server resources that would otherwise be spent maintaining so many connections.
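
That migration amounts to something like the following (a minimal sketch; it assumes a DataSource bean is already defined elsewhere in SpringWebConfig):

@Bean
public JdbcTemplate jdbcTemplate(DataSource dataSource) {
       // JdbcTemplate is thread-safe, so one shared singleton can replace
       // per-class connection handling throughout the application
       return new JdbcTemplate(dataSource);
}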

The trick here is to remember to avoid things like...

MyClass myClassInstance = new MyClass(); // anything @Autowired inside this instance is never injected, so using it throws a NullPointerException

...and prefer instead something that seems a bit more complex at first glance but ties lots of things together as if by magic. In the places you would use your class, such as a test class, access the context-scoped instance this way:
@RunWith(SpringJUnit4ClassRunner.class) // lets JUnit hand injection over to Spring
@ContextConfiguration(loader = AnnotationConfigWebContextLoader.class, classes = {SpringWebConfig.class})
public class TestClass{

       @Autowired
       private MyClass myClassInstance; // at run time, instantiated by SpringWebConfig and scoped to TestClass

       @Test
       public void testTheClass(){
              assertNotNull(myClassInstance); // success!
       }
}
This does change the way you'll work with constructors: passing values to constructors doesn't work well in this model.  Prefer instead to write methods in your classes that accept configuration parameters, if needed.
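
For example, instead of passing a value through a constructor, a bean can expose a configuration method (hypothetical names, purely to illustrate the pattern):

public class MyClass {

       private int batchSize = 100; // a sensible default

       // called by consumers after wiring, in place of a constructor parameter
       public void setBatchSize(int batchSize) {
              this.batchSize = batchSize;
       }
}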

You can read much more about Spring annotations here or check out the official reference guide.

Monday, January 15, 2018

SQL-fu: Modify a Data Bearing Table

I was faced with a challenge: add a column to a table.  It sounds simple enough, but the table already had data in it and had an Identity column, so care had to be taken to preserve the data as well as the identities, which were used as part of a key on another table.  After discussing possibilities with my DBA, we came up with the following approach.

1. Copy the main table to a backup table on the same database.
2. Drop the original table.
3. Create the modified table structure, including the Identity declaration.
4. Turn on Identity Insert so that columns usually protected and written only by the server can be written from the backup data, thus preserving the identities.
5. Insert the backup data, sans new column, into the new table.
6. Update the new table with default values for the new column (optional).
7. Turn off Identity Insert.
8. Drop the backup table.

You can run these steps one at a time to confirm they are working properly, or run them all at once, provided you break the queries into separate batches with the GO statement; otherwise you'll be attempting to write to columns that don't exist at compile time.

Here’s a sample SQL script that accomplishes the above task.

SELECT * INTO BACKUP_MYTABLE FROM MYTABLE
GO
DROP TABLE MYTABLE
GO
CREATE TABLE MYTABLE
(
    oldIdentityCol int NOT NULL IDENTITY(1,1),
    oldColumn1 varchar(10),  -- as appropriate to your original data structure
    newColumn int
)
GO
SET IDENTITY_INSERT MYTABLE ON
GO
INSERT INTO MYTABLE (oldIdentityCol, oldColumn1) SELECT oldIdentityCol, oldColumn1 FROM BACKUP_MYTABLE
GO
SET IDENTITY_INSERT MYTABLE OFF
GO
UPDATE MYTABLE SET newColumn = 1 WHERE newColumn IS NULL  -- optionally populate your new column
GO
DROP TABLE BACKUP_MYTABLE
GO
 

Sunday, November 19, 2017

Decimate Data with JavaScript's Filter Function

In 2011, the JavaScript standard added several new array functions that greatly simplify the process of working with large datasets.  One of the challenges of "big data" shows up when you go to chart it: rendering every individual data point can seriously hamper the performance of your chart, increasing render times, locking up your browser for a while, or crashing it outright.

As I skimmed through several internet postings on elaborate methods for decimating data (that is, reducing the resolution of a dataset by applying some function), I realized that the filter function alone would perform all the work easily.

Consider an array of values:

var a = "4.901712886872069,4.905571030847647,4.909414346738851,4.913242948087607,4.91705694713669,4.92085645484947,4.92464158092928,4.928412433838424,4.932169120816823,4.93591174790032,4.939640419938632,4.94335524061298,4.947056312453386,4.950743736855651,4.954417614098028,4.958078043357581,4.961725122726249,4.965358949226622,4.968979618827425,4.972587226458727,4.976181866026878,4.97976363042917,4.983332611568248,4.986888900366257,4.990432586778738,4.9939637598082856,4.997550054547602,5.001123533723662,5.004684288602818,5.008232409479947,5.011767985692191,5.015291105632452,5.018801856762651,5.022300325626761,5.025786597863601,5.029260758219422,5.032722890560263,5.036173077884096,5.039611402332775,5.043037945203758,5.046452786961651,5.049856007249539,5.053247684900134,5.05662789794673,5.059996723633982,5.063354238428486,5.066700518029206,5.070035637377705,5.07335967066822,5.076672691357564,5.079974772174871,5.083265985131165,5.086546401528796,5.0898160919706985,5.0930751263695155,5.096323573956563,5.099561503290659,5.102788982266802,5.106006078124715,5.109212857457251,5.112469599977525,5.115715770546765,5.1189514375797165,5.122176668829165,5.125391531394446,5.128596091729822,5.131790415652724,5.134974568351865,5.138148614395217,5.141312617737873,5.144466641729777,5.147610749123334,5.150745002080902,5.153869462182165,5.156984190431394,5.160089247264595,5.163184692556541,5.166270585627705,5.169346985251077,5.172413949658882,5.175471536549192,5.178519803092442,5.181558805937842,5.184588601219694,5.187609244563616,5.190620791092663,5.193623295433372,5.1966168117216975,5.199601393608879,5.202577094267201,5.205543966395682,5.208556759340414,5.211560502621755,5.214555250442706,5.21754105652075,5.220517974093625,5.223486055925034,5.2264453543102425,5.229395921081616,5.232337807614062,5.235271064830402,5.238195743206656,5.241111892777258,5.2440195631401885,5.246918803462037,5.249809662482995,5.252692188521762,5.255566429480405,5.258432432849123,5.261290245710962,5.264139914746451,5.266981486238185,5.269815006075326,5.272640519758055,5.275458072401956,5.278267708742335,5.281069473138485,5.283863409577888,5.286649561680356,5.289427972702121,5.292198685539861,5.295011909539312,5.2978172415063565,5.300614725596722,5.303404405596596,5.306186324926734,5.308960526646518,5.311727053457957,5.314485947709622".split(',');

Now let's say I want to decimate that so a small status chart only ever shows around 10 data points.  A few more or less are fine, as long as the chart doesn't keep showing more data as I add several more batches to this initial set.  The following filter call uses the modulo operator as a simple reduction function.

a.filter(function(elem, index, array){ return index % Math.round(array.length / 10) == 0; });

...yields a set of 10 data points, one for every index that divides evenly by the dynamic value array.length/10 (that is, where the remainder is zero).  Math.round simply makes our dynamic value a whole number, which works well with the modulo operator (%) against the integer indexes of the array.

"4.901712886872069,4.950743736855651,4.997550054547602,5.043037945203758,5.086546401528796,5.128596091729822,5.169346985251077,5.208556759340414,5.246918803462037,5.283863409577888"

Now any charting we may do with this data will continue to tell the story, sacrificing resolution in exchange for performance.
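
Wrapped up as a reusable helper, the same idea looks like this (a small sketch; decimate and targetPoints are names of my own choosing):

// keep roughly targetPoints evenly spaced samples from data
function decimate(data, targetPoints) {
    var step = Math.max(1, Math.round(data.length / targetPoints));
    return data.filter(function(elem, index) {
        return index % step === 0; // keep every step-th element
    });
}

var reduced = decimate(a, 10); // ~10 samples, however large a grows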

Thursday, November 3, 2016

New Tools, New Skills

Every so often in my career, I find that I have been using a set of tools long enough that they have been surpassed by the market.  For me, that has been Eclipse for Java development and SVN for code check-in and check-out.  All the cool kids are using IntelliJ and Git these days, so with some time off between contracts, I have spent the week getting familiar and functional with these tools.  I wish I had done so sooner.

Even with Eclipse at its Neon release, it still has some way to go to catch IntelliJ.  While the environments have very dissimilar setups, enough conventions are alike that a few tutorials are all it takes to get up to speed.  IntelliJ does more for the developer and in doing so saves a lot of time.  Eclipse continues the tradition of an open-ended environment made better through many, many plugins.  It doesn't get too opinionated, but that sometimes leaves the way forward on a particular path a little less clear.  IntelliJ, by contrast, has a slightly more opinionated view of how things should be done, so while there is much to learn in moving from one to the other, in reality there isn't as much to learn to become productive.

SVN, a long-time bacon-saver for many developers, including myself, is a centralized repository system.  While it is simple to understand and use, forking and merging code is often not as straightforward as it could be.  You can check out from a certain branch, but merging has always felt like surgery to me.  Git, and specifically the very nice website built around it, GitHub, makes this clearer and more automated.  While the Git console, especially paired with certificate-based authentication, can be cryptic and frustrating, using GitHub and the integrated support in IntelliJ IDEA makes Git as painless to use as SVN, and far less painful when it comes to merging branches.

Another tool I have been longing to embrace for some years now is JUnit.  I didn't even know what assertion-based testing was a few years ago.  I've always written my own test code, but never thought much beyond automating calls to my API to make sure things did what I expected.  There is a whole bevy of testing techniques that go well beyond this, and I've picked them up one at a time as client-driven work has allowed.  While I've been trying to get a recent client to let me re-baseline a few projects taken in from the offshore labs and rebuild them as Test-Driven Development projects from the ground up, I have now had the time to do this on my own and found the experience gratifying.  Thinking about your code from a testing perspective puts new mental focus on lean code that follows the DRY and SOLID approaches.

This meshes very well with my shift to daily workouts this year.  Self-discipline, I find, suits me, and using these technologies has made it both easier and more meaningful in terms of improving the consistency and reliability of my code.  I'm looking forward to future projects and contracts, and expect the future to bring more improvements to the tools we love and use.