Wednesday, January 30, 2019

Coming Up to Speed with Angular 6/7

Over the course of the past few months, I've been making a study of current JavaScript UI frameworks. Angular presently holds the lead as the best fit for my primary client's enterprise environment, so I've been working through a few things to make it a manageable, extensible, quickly deployed solution for a wider swath of development needs. This is a brief discussion of the high points of that journey. While I'm mostly taking advantage of Angular 6 functionality, I've been using the Angular 7 CLI to make sure the applications I build have the latest dependencies right from the start.

POC

The first thing I like to do after nailing down the tool chain for any new technology I am learning is to create a proof-of-concept implementation. In this case, we had a simple three-page survey tool that was used once a year for an internal campaign. For the new year, the client wanted to add a leaderboard with bar charts and email notifications.

Translating the current work from JSP-based web pages to a modular design took a lot of effort, but less time than I thought it would. The first big surgical decision was to separate the UI from the back-end web application that provided the APIs driving the user experience. This decision paid off in the next section of this article, when it came time to break things out into libraries.

The form this separation took, within my Maven project, was that of a multi-module project. Maven is a dependency management and build tool that makes combining external dependencies with your own code a pretty straightforward, declarative affair. By extension, making your own modules follows roughly the same approach. It does, to an extent, but what you inherit is the responsibility for creating the packaging for your own sub-modules -- a task usually transparent to you when you consume externally provided dependencies.

So in this case, I had separate Maven build files for the API, the UI, and a Web module which simply provided the Web Archive (.war) packaging at the end of the build. With this arrangement, a web developer can maintain the UI and a back-end engineer can maintain the Spring API separately, without excessive concern for one another's activities.
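
As a rough sketch (the module names here are illustrative, not the project's real ones), the multi-module layout looked something like this:

survey-poc/
    pom.xml          parent build file, packaging type "pom", listing the three modules below
    survey-api/      the Spring back end exposing the REST APIs
    survey-ui/       the Angular application, built during the Maven build
    survey-web/      a thin module that assembles the UI and API into the final .war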

The POC was a success, and it also created the opportunity to begin identifying and isolating our own unique Angular components as reusable libraries.

MAKING COMPONENTS

Abstraction, if you're not familiar with the term, is the practice (art / science) of breaking a system down into representational components, each serving a single purpose in a fashion generic enough to be useful in many different scenarios. Selecting a function to extract from my POC for this treatment was not my biggest challenge. Getting the process nailed down, from an Angular world view, was what required the most work.

Angular provides external library components to you through NPM, the Node Package Manager, which by default points to a global, internet-hosted registry of Node modules. If you want to publish your own modules to NPM, you have to sign up for an account. For an enterprise environment, this may not be the best approach. In some cases, components you develop are going to be useless outside of the environment you are working in. In others, they may represent portions of intellectual property that cannot be shared. So I had the challenge of working out how to do this without the NPM repository.

The first step was to create a library project. There are several guides available online for this. What the process taught me was the need to carefully decouple my Angular components. While the Angular CLI makes it super easy to add components to a project, you want to be careful about how they are connected to parent components after that step. While it is possible, for example, to make components out of form elements, turning one of those elements into a standalone library means you won't necessarily have the parent context while developing and testing that library. Creating components so that they have a minimum of interaction with the parent component, then, predisposes them to future extraction into standalone libraries.
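
To illustrate, here is a minimal sketch of what a decoupled component might look like (the component, selector and property names are hypothetical). Configuration arrives through @Input properties and results leave through @Output events, so the component never references a parent directly:

import { Component, EventEmitter, Input, Output } from '@angular/core';

@Component({
  selector: 'lib-rating-bar',
  template: `
    <span *ngFor="let star of stars; let i = index" (click)="select(i + 1)">
      {{ value > i ? '★' : '☆' }}
    </span>`
})
export class RatingBarComponent {
  @Input() max = 5;     // the parent supplies configuration...
  @Input() value = 0;
  @Output() valueChange = new EventEmitter<number>();  // ...and subscribes to results

  get stars(): number[] {
    return new Array(this.max).fill(0);
  }

  select(rating: number): void {
    this.value = rating;
    this.valueChange.emit(rating);  // no direct reference to any parent component
  }
}

A parent template can bind it with <lib-rating-bar [max]="5" (valueChange)="onRate($event)"></lib-rating-bar>, while the component itself can be developed, tested and eventually packaged in complete isolation.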

The next thing was figuring out how to both put the library under source control and provide a path to being able to package and install it into another future application. This involved three steps.

First, only the "lib" folder in the library component is checked in. Checking in the surrounding test harness is not required.

Second, creating an on-board packaging script for the library allows future users to build it into a .tgz locally after they clone the repository. (npm's built-in "npm pack" command produces exactly this kind of tarball, so the script can be as little as a library build followed by npm pack in the build output folder.)

Third, using NPM's ability to install from a local file path, the .tgz can be installed as though it came from the global NPM repository -- for example, npm install ../some-lib/dist/some-lib-0.0.1.tgz, where the path and file name are illustrative.

This seems like a lot of work, but the important thing is that it keeps the source code for the library open for extension and modification within the enterprise. I want future developers to be able to fork this library to make their own components, or branch the code and submit pull requests when they fix bugs. By keeping the library in raw source form, we get the best features of packaged libraries and reusable code combined.

NEXT STEPS

Going forward, I have a couple of targets I want to hit with my Angular learning. I have a request to expand the libraries that are documented and tested - essentially to create a loose standard for us to follow within the enterprise - so expanding my knowledge of existing external components is going to be a major theme in the coming months. Secondly, I have been delving into Android development on the side. Angular, through NativeScript, promises to expedite the process of developing tools that can work on multiple platforms, including Android, so I'll be looking at that.

RESOURCES
Here are some of the resources I've found thus far that have been useful to this pursuit:

Making custom libraries

https://angular.io/guide/creating-libraries

https://medium.com/@tomsu/how-to-build-a-library-for-angular-apps-4f9b38b0ed11

More in-depth example for making libraries

https://blog.angularindepth.com/creating-a-library-in-angular-6-87799552e7e5

https://blog.angularindepth.com/creating-a-library-in-angular-6-part-2-6e2bc1e14121

Saturday, January 12, 2019

JavaScript Frameworks and Your Business

Being a small business owner who does a lot of subcontracting on long-term contracts, I have both the freedom and the professional need to stay on top of new technology trends.  Sometimes this need becomes very pressing, like when I know or suspect a contracting gig may be ending, and at other times work makes such a huge mental demand that I don't have time to keep up the way I would like to.

JavaScript frameworks are one of those areas where I wish I could have kept up better over the last four or five years.  My focus instead had been on systems integration and J2EE.  UI, while sometimes an important part of what I was doing, was in most of those cases a secondary concern, and while I was glad to adopt Bootstrap and W3.CSS, they are really about decorating the UI; they are not frameworks.

JavaScript Frameworks: Basic History

The idea of frameworks goes way back.  I don't know the precise origin, but I started getting involved with them when prototype.js, script.aculo.us and ext.js were in their infancy, even contributing to a few bug fixes on prototype and ext through my connections with developers on those projects.  Early frameworks focused on reducing the amount of code you wrote.  Forks of ext.js included complete makeovers for some UIs generated by things like IBM Lotus Domino (notorious for its broken and ham-fisted HTML rendering).

These two ideas -- reducing coding effort by creating simplified and powerful boilerplate code, and redrawing the UI through DOM manipulation for a better User eXperience (UX) -- continued to evolve over time, giving rise to things like ember.js, knockout.js and others.  All the while, the focus was moving away from just scripting the UI and towards more abstract concepts and solid design patterns.  Knockout.js, for example, introduced binding that made keeping the UI view and its underlying model in sync much, much easier.

Modern Frameworks

Today, in 2019, Angular and React have become the dominant frameworks.  They build on a legacy of prior art while also reinventing and reconsidering the design challenges web application developers have faced over time.  Maintaining the state of an application from one screen or panel to another, handling server-side requests for data, ensuring cross-browser or even cross-platform compatibility, and deciding how to divide up logic and UI are just a few such considerations.

Angular, created by Google, leverages another open source tool, TypeScript.  React, created by Facebook, is a less opinionated framework that nonetheless offers a lot of benefits.  SitePoint published a very good article that I recommend to anyone trying to choose between the two.  Importantly, it offers a conclusion I very much agree with: don't choose just one.  I believe, as others do, that there is a right tool for each job, and that it is often not the tool you have used the most or most recently.  A fresh analysis of the needs and environment of a project should be undertaken each time you embark on a new effort.

The Probable Future

Presently, both React and Angular offer CLIs to expedite and standardize the bootstrapping of a new project.  (I rather like the very structured approach Angular takes.)  Code generation is just enough to get you started; after that, you have some learning to do.  But the everyday tasks we perform as application developers are largely the same.  The details are what make the biggest difference in each implementation, and we often enough repeat work done by others.  Mixins (in the case of React) and continued contributions to libraries and add-ons for both frameworks follow the same general trend as software development in other technologies: continued modularization, code generation, standardization, and ever more complete and competent frameworks.

A merging of platforms is also underway, where projects like React Native and NativeScript provide the ability to take these primarily web browser focused frameworks and create native applications for both Android and iOS devices.  The number of different skill sets needed to create something with a broad audience is declining as time goes on.  Meanwhile, the tools and platforms we build these solutions upon continue to improve and evolve through natural growth and innovation, and the frameworks and tools we employ to make more utility out of less time and fewer people will evolve and change in step with, or just behind, those platforms.  The core challenge of providing value to customers will remain, however, and that means good business requirements analysis will be an in-demand skill for some time to come.

It also means that some skills will have more longevity in the market than others, though the details of how they are implemented will continue to change.  UI design continues to change subtly as customers move from the desktop and tablets to smaller presentation form factors on mobile devices.  Application development, though, is changing rapidly.  So much so that within the span of three to six months, a project's code base might need to be reevaluated from a dependency perspective, as changes and updates to the ever-increasing number of external dependencies continue at a very brisk pace.  What was the best way to do something last year is out of date and clumsy now.  Core design principles remain important, however.  Even as we see a near constant flow of changes in the tooling and libraries we take from the open source community and commercial markets, understanding what reliable and maintainable designs look like, and how to avoid common pitfalls, remain foundational skills that do not age out.

All Things Considered

The challenge for a modern provider of software design and development solutions, then, is to balance time spent learning and updating skills with providing lasting value and value-added services to customers.  My primary concern for any client is always that the time they pay me for results in cost savings or profits.  We should be striving to optimize systems so that less time is wasted, the efficiency of systems improves, the friction of transactions decreases, and the investment in IT as a whole has a clear and measurable impact on the bottom line.

These opportunities only arise as often as businesses have the presence of mind to ask key questions, just as any software developer should: Are we making the best use of our resources? Are we duplicating work? Are we spending disproportionate capital on a problem (addressing a 1% problem with more than 1% of our resources, for example)? How can we improve?  What can we do better, faster or cheaper?  How do we continue to improve quality and reduce our future cost of ownership?

And that's the sort of conversation I like to have with managers and business owners.  If you haven't had a conversation like that lately, drop us a line at: datatribe at gmail dot com. 

Frameworks, then, should be viewed as value-added investments.  Adopting them has an initial cost in terms of developer ramp-up, but the benefits to quality and speed of development in the future are quite clear from a technical standpoint.  It's the sort of investment that is on the same level as adopting a source code control solution and giving consideration to continuous integration or continuous delivery solutions.

If you're a big enough business to have a dedicated IT department, these will be familiar concepts, or ought to be, and are worth careful consideration.  If you are a small business, what you need to know is that the value delivered when a solution provider mentions a framework such as Angular or React comes in the form of quicker initial results and faster future updates.  You should expect turnaround times to improve greatly over older ways of creating custom solutions.  This means your overall or initial cost will likely be lower in terms of man-hours, allowing more capital to be diverted to other needs or folded back into making better tools for your employees and customers.


Wednesday, December 5, 2018

Generating Complementary Color Pairs with JavaScript

When using a library such as Chart.js, you will sometimes want to assign colors to elements, such as bars or pie wedges, in a functional way so that n items receive n unique colors.  Chart.js in particular allows you to provide a fill color and a border color, each as an array equal in length to your dataset.

For the purpose of creating pleasant colors with "half tone" complements, I recently rolled a few helper functions based on an example for generating random rgba color strings.  I hope they are useful to you.

// Returns an [r, g, b, a] array biased towards the upper end of the
// brightness spectrum (each channel lands between 155 and 255).
function random_rgba() {
    var o = Math.round, r = Math.random, s = 100;
    var rgba = [];
    rgba.push(o(r() * s) + 155);
    rgba.push(o(r() * s) + 155);
    rgba.push(o(r() * s) + 155);
    rgba.push(1);
    return rgba;
}

// Produces the "half tone" complement of a color by dimming each
// channel to 70% of its original value.  Math.round keeps the
// channels as integers, which every browser accepts.
function dimRGBA(rgba) {
    var tone = 0.7;
    var dimtone = [];
    dimtone.push(Math.round(rgba[0] * tone));
    dimtone.push(Math.round(rgba[1] * tone));
    dimtone.push(Math.round(rgba[2] * tone));
    dimtone.push(1);
    return dimtone;
}

// Formats an [r, g, b, a] array as a CSS rgba() string.
function stringifyRgba(rgba) {
    return 'rgba(' + rgba[0] + ',' + rgba[1] + ',' + rgba[2] + ',' + rgba[3] + ')';
}

// Returns a two-element array of CSS color strings: a random bright
// color and its dimmed complement.
function rgbaComplementaryPair() {
    var pair = [];
    var sourcecolor = random_rgba();
    var complement = dimRGBA(sourcecolor);
    pair.push(stringifyRgba(sourcecolor));
    pair.push(stringifyRgba(complement));
    return pair;
}
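
As a usage sketch (the data values and the dataset wiring below are hypothetical), generating one color pair per data point for a Chart.js dataset looks like this:

// Build parallel fill and border color arrays, one pair per data point.
var data = [12, 19, 3, 5, 8];
var fills = [];
var borders = [];
for (var i = 0; i < data.length; i++) {
    var pair = rgbaComplementaryPair();
    fills.push(pair[0]);   // the bright random color becomes the fill
    borders.push(pair[1]); // its dimmed complement becomes the border
}
// Then, in the chart configuration:
// datasets: [{ data: data, backgroundColor: fills, borderColor: borders }]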

Thursday, September 27, 2018

Tomcat Manager Dark Theme

If you use Tomcat as a web server, you know the manager console isn't much to look at.  It's adequate, but not pleasant.

As a developer, I enjoy a lot of the dark themes available for Eclipse and other editors. So, to prevent blindness when switching to the Tomcat management console, I created a set of stylesheet overrides you can use in Chrome with the Stylus plugin. The theme offers a little more user friendliness by highlighting the table row for the application you are about to interact with when you click Stop, Undeploy or Reload.  There are also some UI hints for hovering over controls.

Please enjoy, and feel free to comment with suggested improvements.

input[type="submit"]{background-color: #919090!important; color: #823d01!important}

input[type="submit"]:hover{background-color: #737373!important; color: #ffb829!important; border-color: #555555!important; cursor: pointer}


body, table {background-color: #332a1f!important; border: 0px solid!important}

table{border-collapse: collapse!important; border-color:#866635!important}

button, input {background-color: #333333!important; border: 1px solid white!important; border-radius: 5px; color: #ffb300}

input:hover{background-color:#aaaaaa!important;color:#aa6900!important}



tr {background-color: #777777!important}

tr:hover{background-color: #919191!important} 
tr:hover a {color: #333333}


td {background-color: rgba(0,0,0,0)!important; color:inherit!important}

a{background-color:rgba(0,0,0,0)!important}

*{color:white}

a{color: #ffb300}

tr:hover{border-color: #aaaaaa!important; background-color:#999999!important; color:#aa6900!important}

img {height:32px;border-radius:20px;border:#a8a8a8 2px solid}

Tuesday, March 27, 2018

Upgrading Performance with a Solid State Drive

You want to receive the best performance from your computer, but you may not want to have to spend a lot of money to get it.

One simple way to increase performance is to add an inexpensive SSD to host your most heavily used programs and files.  Normally, just copying files around between hard drives on Windows produces broken shortcut links and invalid registry entries.  You could go track all of those down and edit them by hand, or you could use links (symbolic links, hard links or junctions) to tell Windows that the files have moved.  Then, any time a file is requested from the old location, Windows finds it at the new location, as though it had never moved.

The mklink command, included with Windows since Vista, makes this task much easier than uninstalling and reinstalling your programs or editing registry entries by hand.  Here's an example from my computer.  I had noticed that loading my desktop and many documents I use regularly had become slow on the spinning hard disk, so my User Profile directory became the first candidate for migration.

First, I used Windows Explorer to simply drag and drop the folder from one place to another.  Then, launching the Command Window as Administrator (find it by hitting the Windows key and typing "CMD", then right-clicking on the shortcut that appears and selecting "Run as Administrator"), I issued the following command:


C:\Users\JC>mklink /J "C:\Users\JC\Documents" "G:\Users\JC\Documents"
Junction created for C:\Users\JC\Documents <<===>> G:\Users\JC\Documents

This produced immediate benefits. For starters, when I start Windows and log in, my Desktop appears almost immediately.  Opening any files that I may have been working on, such as large CAD or SketchUp files, has extremely low lag. What used to take many seconds is now almost instantaneous.

This is a powerful feature for getting more useful life out of your aging systems.  While we wait for Intel and AMD to re-engineer their chipsets to exclude the vulnerabilities published early this year, the $40 - $50 spent on a solid state drive is a tenth or a twentieth of what you would pay for a full system upgrade at this time.


Bench Notes:

Something I noticed, however, when I began copying a large folder containing around 79 GB of data, was that about 30% of the way in, the data transfer rate topped out around 35.1 MB/second and then began slowly falling. I suspected that the chips responsible for I/O were getting hot, increasing resistance and slowing data transfer.  So I fired up SpeedFan, a tool for tuning the speed of your variable-speed on-board fans, and it immediately increased the RPM of one internal fan.  Over the next several seconds, the data transfer rate rose from 34.5 to 38.4 MB/second before slowly declining again.  While I don't know for sure that the heat build-up was slowing the data transfer rate, SpeedFan did report that the physical hard disk was a desiccating 124°F.

Also, in my case, I sacrificed having a connected DVD drive for the addition of the SSD due to a lack of SATA cables. If you order an SSD, make sure you order a connection cable set as well.  You'll need one cable for power and one for data, or a combo connector.  Take a peek at your motherboard to determine what you need, or have a trusted service technician do this for you.

A word on backups: always have a backup solution in place for your important data.  While SSDs are now a mature technology, when an SSD fails the data is almost always lost unless you happen to have handy a skillful electrical engineer with some experience in repairing them.  I recommend the freeware app "Create Synchronicity" for scheduling backups, and suggest you have a home NAS (Network Attached Storage) somewhere on premises to serve as an archive.

Thursday, January 25, 2018

Understanding @Autowired

The life of a full stack developer is an adventure full of new things to learn daily.  This week, after working for about a year with templates based on Spring Boot, I finally came to understand what is going on with @Autowired.

Spring Boot, if you use it for Java development, provides a very clean and easy to use dependency injection model that falls under the @Autowired annotation.  It's "easy to use", that is, once you wrap your head around top-down dependency injection and what it means for instantiated classes.  The trick is starting with the SpringWebConfig class, where your instantiated classes will no longer be created with the "new" operator, but returned from factory methods annotated with @Bean.

In SpringWebConfig.class:

@Bean
public MyClass getMyClass() {
    return new MyClass(); // the one place you call new
}

What @Bean buys you is registration into the Spring context, making the instance available throughout all other classes annotated with @Component.  The downside is that if you want to use any @Autowired dependency in a class, such as Environment, you have to register that class as a bean and then autowire an instance of the bean wherever you would have instantiated the class, as in a test class or a main execution class like a Controller or Main.  This means classes will be eagerly loaded, creating slightly longer start-up times for applications, but there is less code to write and manage.

Update: I neglected to point out that a benefit of using beans in this way is that they are effectively all singletons. This means each one exists in memory once, which enforces tidy memory management and can improve your application's performance at run time.  A small recent anecdote: migrating from ad hoc JDBC connections in just a handful of classes to a single Spring JdbcTemplate registered as a bean reduced our open database connections from 117 to fewer than 20.  While this means less memory consumed on your application server, it also frees up database server resources that would otherwise be spent maintaining so many connections.
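
To make that anecdote concrete, here is a hedged sketch of the bean side of that migration (the names are illustrative, and imports are omitted to match the other snippets in this post):

In SpringWebConfig.class:

@Bean
public JdbcTemplate getJdbcTemplate(DataSource dataSource) {
    // Spring injects the DataSource bean; the one JdbcTemplate built here
    // is thread-safe and shared by the whole application
    return new JdbcTemplate(dataSource);
}

Then, in any @Component that needs the database:

@Autowired
private JdbcTemplate jdbcTemplate; // the same singleton instance everywhere -- no per-class connections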

The trick here is to remember to avoid things like...

MyClass myClassInstance = new MyClass(); // this bypasses the Spring context, so any @Autowired fields inside the instance stay null and will throw NullPointerExceptions when used.

...and prefer instead something that seems a bit more complex at first glance, but ties lots of things together as if by magic. In the places where you would use your class, such as in a test class, access the context-scoped instance this way:
@RunWith(SpringRunner.class) // JUnit 4 needs Spring's test runner for the context to load
@WebAppConfiguration // required companion of the web context loader below
@ContextConfiguration(loader = AnnotationConfigWebContextLoader.class, classes = {SpringWebConfig.class})
public class TestClass {

    @Autowired
    private MyClass myClassInstance; // at run time, this is instantiated by SpringWebConfig and scoped to the test context

    @Test
    public void testTheClass() {
        assertNotNull(myClassInstance); // success!
    }
}
This does change the way you'll work with constructors.  Passing values to constructors doesn't work well in this model.  Prefer instead to write methods in your classes that accept configuration parameters, if needed.

You can read much more about Spring annotations here or check out the official reference guide.

Monday, January 15, 2018

SQL-fu: Modify a Data Bearing Table

I was faced with a challenge: add a column to a table.  It sounds simple enough, but the table already had data in it, and it had an identity column, so care had to be taken to preserve the data as well as the identities, since they were used as part of a key on another table.  After discussing possibilities with my DBA, we came up with the following approach.

1. Copy the main table to a backup table on the same database.
2. Drop the original table.
3. Create the modified table structure, including the identity declaration.
4. Turn on identity insert so that columns usually protected and written only by the server can be written from the backup data, thus preserving the identities.
5. Insert the backup data, sans the new column, into the new table.
6. Update the new table with default values for the new column (optional).
7. Turn off identity insert.
8. Drop the backup table.

You can run these steps one at a time to confirm they are working properly, or run them all at once, provided you break the queries into separate batches with the GO statement; otherwise you'll be attempting to write to columns that don't exist at compile time.

Here’s a sample SQL script that accomplishes the above task.

SELECT * INTO BACKUP_MYTABLE FROM MYTABLE
GO
DROP TABLE MYTABLE
GO
CREATE TABLE MYTABLE
(
    oldIdentityCol int NOT NULL IDENTITY(1,1),
    oldColumn1 varchar(10), -- as appropriate to your original data structure
    newColumn int
)
GO
SET IDENTITY_INSERT MYTABLE ON
GO
INSERT INTO MYTABLE (oldIdentityCol, oldColumn1) SELECT oldIdentityCol, oldColumn1 FROM BACKUP_MYTABLE
GO
SET IDENTITY_INSERT MYTABLE OFF
GO
UPDATE MYTABLE SET newColumn = 1 WHERE newColumn IS NULL -- optionally populate your new column
GO
DROP TABLE BACKUP_MYTABLE
GO