Thursday, March 24, 2016

Let's not blame NPM and modular development over a hissy fit

This week a developer who was pissed off at NPMJS.org pulled all of his packages, including one strange little package, left-pad, that was a dependency for a frightening number of other packages and software projects, including React (see http://www.businessinsider.com/npm-left-pad-controversy-explained-2016-3).
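
For context, the package in question does something trivial. A rough sketch (this is not the actual left-pad source, just an illustration of how small the package was) looks something like this:

// pad a string on the left with a character until it reaches the desired length
function leftPad(str, len, ch) {
str = String(str);
ch = ch || ' ';
while (str.length < len) {
str = ch + str;
}
return str;
}

leftPad('7', 3, '0'); // "007"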

Sunday, March 20, 2016

I can't help it - I love PostgreSQL!

Recently, with regard to a new project, a colleague asked me to justify my panning of MySQL in favor of PostgreSQL. While I wrote it to him personally, I think it stands on its own and gives a nice breakdown of PostgreSQL vs. MySQL for those trying to make the same decision. I expect some flaming comments, but it would be nice if the flamers included references/proofs:

Monday, May 2, 2011

HTML5 and CSS3 are on!

Updated!
I think now it is time to revisit my predictions regarding browser support for HTML5 and CSS3. In the previous writing (see below), I noted that IE9 was poised to push out the (significantly) non-conforming IE7 and IE8 within a period that I predicted to be a little more than a year. Of course, even then, IE6 was not a concern as it was already widely regarded as a dead browser - today it stands at less than half of one percent of browsers currently in use around the world.

Pushing out IE7 and IE8 is important because of their lack of support for HTML5 and CSS3, while almost every other browser (particularly mobile browsers) does a fine job of supporting the new standards. When IE7 and IE8 finally vanish from desktops, we will be able to employ HTML5 and CSS3 without worrying about *the hacks* that have plagued web design for the last decade. We will be able to design great-looking websites with much better functionality, and we will be able to do it easily and quickly.

So, how about some statistics? Has IE9 eclipsed both IE8 and IE7 as I had predicted?

Once again I am using statistics courtesy of StatCounter. Statistics can be misleading, but for our purposes we are just looking at general trends. The most important trend we can see from the chart is IE9's meteoric rise as it cannibalized IE8 through automatic updates from Microsoft. By June 2012, the year and a bit that I had predicted, IE9's share had eclipsed that of all its former versions combined.

During the same period Google Chrome also enjoyed a parallel stellar rise, partially fueled by IE6 and IE7 users fed up with a crappy browser and no upgrade path. If you have Windows XP you are not supported by Microsoft and newer versions of Internet Explorer are not available to you. In my opinion that is not only a shame for Microsoft but a strategic boo boo.

Then there is the matter of the steadily growing mobile market with increasingly larger screens. I've had the chance to preview a few iPads and Android tablets, and I have been so impressed with the browsing experience, which is clearly just getting better with time. It is widely believed that people will eventually abandon desktop computers for tablets out of sheer convenience. The statistics agree but point to a long, protracted battle.

It would be hard to deny the eventual death of the desktop, but it will be a long time coming, measured in many years and perhaps (although much less likely) decades. In fact, I doubt it will be tablets that spell the end of desktops and laptops. More likely we'll have glasses that superimpose data on our field of view, with control through retina-tracking and voice command. Certainly by then we won't have to *hack* CSS to support crappy browsers from Microsoft. More likely, Ballmer will have finally buried Microsoft and we'll be moaning about some other monopoly instead.

The only question that remains is posed by those companies and web designers who think it is smart to try to please everybody by designing for each browser, no matter how clearly impending its inevitable demise. I do my best to explain that supporting IE6/7/8 is pointless - a huge effort that will only be enjoyed by 15% of the browsing population today, and dwindling rapidly.

It's a question of "who". Who are these people still using IE6/7/8? I insist that these users are the least likely buyers, even if you bend over backwards to try to serve them. Here is my summary:

IE8 User - a person who has specifically opted to decline the IE9 update. They really want to keep IE8, possibly because:
  • bandwidth - they live in the middle of nowhere or they are frugal or both
  • they are resistant to change
  • they are technically confused and/or challenged
IE7/6 User - a person who still uses Windows XP (or older) on some old desktop or laptop that is about to bite the biscuit. This person may be financially limited and/or frugal and/or confused and/or intellectually challenged. Almost everyone has or knows an "aunt" or "grandfather" who insists the computer they bought in 1998 "still works fine dammit!"

Another subset of this group is in Latin America, where internet cafés are the place most young people get their dose of internet (these days it's mostly Facebook chatting, formerly MSN chatting). These internet cafés largely run pirated software, almost always Windows XP. Because they have pirated versions they are not able to upgrade the browser, and the version is often still only IE6. Because of this, I believe, you will find Google Chrome hugely popular in South America. But you can still find crusty little internet cafés with creaky old computers running IE6 in small towns in this part of the world.

Note: there were (a few years ago) several small portable solid-state-memory notebooks on the market that were kind of cool. They came with XP only, because of its small footprint that didn't hog the limited solid-state memory. With no spinning hard drive the batteries last forever; they're small but still have a pretty good-sized keyboard and screen, and are awesome for traveling. But anyone owning one of those had the sense to install Chrome or Firefox.

When you compare the remaining IE6/7/8 users you find the following commonalities:

  • they are frugal (old?)
  • they are resistant to change (old?)
  • they are technically confused or challenged (old?)
  • they are financially limited (young? old? developing nation?)

The conclusion I draw is that these people do not buy things on the internet unless you are selling funeral services, collectible figurines, or ring-tones. In my experience, you are forced to double your expenditure to design websites that support these IE6/7/8 users. I insist you will not realize a benefit for the added effort unless you are specifically targeting older Americans.

Instead, let's push this group to upgrade their buggy, security-flawed, unsupported browser to Firefox or Chrome or Opera or Safari! And now, let's design for HTML5 and CSS3. Remember also, Microsoft is coming out with IE10, which will certainly drive back IE8 within 14 months (another prediction that I will revisit - hopefully with good news that the last of the holdouts has converted). Imagine it this way - you spend a whole bunch of time and money *hacking* a website together to support IE8 and below, and it all becomes irrelevant in 14 months. After 14 months all the *hacks* become useless and just slow your website down. Why would you do this to your new project? Don't you want to create a website that lasts technically and aesthetically? Of course you do! So let's just admit that HTML4 is dead and let's move on :)


The old article from May 2011


Today I received a Windows update that included Internet Explorer version 9 as an important update. In years past that would have made me cringe; however, since IE9 is now (perhaps) as standards-compliant as Chrome or Firefox, I have great reason to welcome this push. What this means is that all machines running Windows versions since Vista (which shipped with IE7) will now receive IE9 through the Windows Update mechanism.

By replacing IE7 and IE8, Microsoft is greatly increasing the percentage of computers that are HTML5 and CSS3 standards compliant (or close enough). As of May 1st, 2011, Wikipedia reports IE usage numbers from StatCounter at 42%: http://gs.statcounter.com/#browser_version-ww-monthly-201004-201104

April 10 - 11, 2011
Browser share:
IE 42.14%
Firefox 30.22%
Chrome 19.85%
Safari 5.17%
Opera 1.94%
Mobile 5.69%
This 42% is maddening because it is a major stumbling block for HTML5 and CSS3 adoption. But IE9 is a major improvement in support of the new standards. Its adoption will certainly eat into the existing share of IE7 and IE8 and, over time, eclipse them - just as IE7 stepped on IE6, then IE8 overtook IE7. Looking at the graph of browser version usage statistics, we can see a distinct trend where one browser version overtakes another over a period of approximately 1 - 2 years. Hard drives die and older computers are replaced with newer models - eventually the percentage of non-compliant browsers disappears, along existing trend lines, within a bit more than a year.

Source: StatCounter Global Stats - Browser Version Market Share


Today, taking into account the browsers that can support HTML5 and CSS3 (Firefox, Chrome, Safari, to a lesser degree Opera, and about 50% of the mobile market with Android and Apple handhelds), we have approximately 60% of browsers capable of handling much of the standards.

If we extrapolate the trends of browser adoption to include IE9, we should see that percentage jump to near 80% within a year and perhaps 90% in 2 years. Fair enough, today only 60% of browsers can enjoy the snazziness of websites crafted using the new standards. But the new standards degrade gracefully - they are simply not seen in older browsers.

So big deal - that old IE6 browser on your auntie's aging desktop with the 14-inch monitor can't show the fancy new rounded borders or gradient backgrounds or custom fonts - do you think she really cares? If she did, she would have bought a new laptop by now...

What is more important is that a growing majority of people will be able to view some slick new websites, delivered fast, without all the old hacks and image tricks that slow your site down. Why don't you ask yourself which is more important - to send out a slower website that chews bandwidth and doesn't necessarily look *that* great but looks the same for everybody... or to send out a site that is brisk to download and looks fantastic for those who actually give a damn and bother to update their browsers?

The people who don't care about updating their browsers probably don't even care about the plainer version of your site anyway. Besides, their hard drive is about to bite the dust next month and they'll be viewing your site on a fancy new laptop or iPad anyway... Let's just design HTML5/CSS3 websites now and recommend that users of older non-compliant browsers get an update or change over to a competitor's browser. Why not just suggest to these users that they can view your site in all its glory by making a switch to a new browser? We can speed the process of HTML5 and CSS3 adoption if we do...
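
As a rough sketch of how you might nudge those users (the specific features tested and the wording of the notice are my own assumptions, not a prescription), a little feature detection goes a long way:

// if a couple of representative HTML5/CSS3 capabilities are missing, suggest an upgrade
window.onload = function () {
var hasCanvas = !!document.createElement('canvas').getContext;
var hasBorderRadius = 'borderRadius' in document.createElement('div').style;
if (!hasCanvas || !hasBorderRadius) {
var notice = document.createElement('div');
notice.innerHTML = 'This site looks much better in a current browser such as Firefox, Chrome, Opera or Safari.';
document.body.insertBefore(notice, document.body.firstChild);
}
};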

Tuesday, October 26, 2010

A little walk down memory lane : In defense and condemnation of Microsoft

I have always been a vocal proponent of implementing JavaScript in the client with the intent of pushing processing cycles out onto the client machines - to reduce the hurry-up-and-wait, to reduce bandwidth and load on the server and increase its ability to handle more concurrent sessions, and to realize the vastly scalable thick-client application.

Indeed, it was Microsoft's Internet Explorer version 4.0 (released in beta and final through 1997) that was the turning point for me. With its improved support of CSS, DOM, and JavaScript... the fundamental changes that integrated it with the OS and Windows domain security... the new functionality distinguished it (from Netscape) as the "real" browser.

But there was something beyond all those features, something that we take for granted today that really turned my crank, that really converted me into a serious Microsoft acolyte. It drove me to study like mad and achieve Microsoft certifications at a furious pace.

The little feature was called Remote Data Services (RDS), an ActiveX object that you could embed in your HTML and access via [Java]script. It could be used to fetch data from a server, via script, without requiring a refresh of the page and waiting for all the redundant data on a page to be re-transferred over the puny pipes of the 33.6k modems available at the time. [Note: I have always been like Ebenezer Scrooge about data transfer since my formative years as a 16-year-old kid driving my parents nuts running a Wildcat Dungeons and Dragons BBS in the evenings using a 300 baud acoustic coupler connected to my father's IBM 8086 - sloooooow.] It meant a new breed of applications could be built, delivered via the web, that would connect you to the world with the ease of simply visiting a webpage.

I found many uses for it whilst working for a large bank in Canada that (wisely) implemented Internet Explorer version 4.5 throughout its entire infrastructure - mostly updating parts of pages to display near real-time data, or populating components of a form as the user completed them. I cannot recount how many times I used RDS to populate select lists in a form as the user selected an item higher in the hierarchy; think Country - State/Province - City - etcetera...

It was during this time that I tried to coin my own term for this kind of application: the "rich client". It didn't stick with anyone and, thankfully, someone else popularized "thick client", which was the perfect term for a concept that stood between "thin client" and "fat client". It is my feeling that we have Microsoft to thank for the innovations that allowed us to conceive and realize the "thick client". It wasn't perfect, but it was a reasonable start that you could build real solutions with. But Microsoft didn't stop there.

They followed with Internet Explorer 5 which, in addition to improved CSS, JavaScript, and XML/XSLT, included a new ActiveX object called XMLHttpRequest, to which many attribute the birth of AJAX (a term coined many years later). Microsoft's dominance in the browser wars was clear and deserved, as they were the innovators of the day - paving the way to Web 2.0.
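
To illustrate the pattern that RDS and later XMLHttpRequest made possible - fetching a fragment of data via script and updating part of a page, no refresh - here is a minimal sketch of populating a dependent select list. The endpoint, element ids, and JSON response shape are invented for the example:

// create the request object, falling back to the old ActiveX flavour on early IE
function createXHR() {
if (window.XMLHttpRequest) return new XMLHttpRequest();
return new ActiveXObject("Microsoft.XMLHTTP");
}

// fetch the regions for the chosen country and rebuild the second select list
function loadRegions(countryCode) {
var xhr = createXHR();
xhr.open("GET", "/regions?country=" + encodeURIComponent(countryCode), true);
xhr.onreadystatechange = function () {
if (xhr.readyState === 4 && xhr.status === 200) {
var regions = JSON.parse(xhr.responseText); // assumed shape: [{"code":"..","name":".."}, ...]
var select = document.getElementById("region");
select.options.length = 0; // clear the old entries
for (var i = 0; i < regions.length; i++) {
select.options[select.options.length] = new Option(regions[i].name, regions[i].code);
}
}
};
xhr.send(null);
}

// wire the country list to the region list (assumes the elements exist on the page)
document.getElementById("country").onchange = function () { loadRegions(this.value); };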

Today, Microsoft has taken years of beatings for browsers that stood still and ignored evolving standards for years. Versions 6, 7, and to a lesser degree version 8, all forced designers to hack and double-design CSS for multiple browsers - a painful exercise, and all Microsoft's fault for trying to write its own standard.

Finally, with version 9 - in recognition of HTML5 (and the obvious opportunity to kick Flash out of the marketplace), and the realization that they have lost the battle for control of the standards helm - Microsoft has released a beta that complies with standards as much as or more than its competitors.

But, I warn you, do not let the Microsoft harassment end - there is still a vast installed base of versions 5.5, 6, 7, and 8. You'll find them on Gramma's computer or on the majority of computers running pirated copies of Windows (found largely in the developing world).

Microsoft has been very lax about updating the installed base of old browsers, claiming that they should not be expected to update browsers on pirated versions of their software. I'm sure the hackers of the world rejoice daily that there still remains a huge base of infected or infectable machines. Designers cry when usage numbers point to still-significant numbers of these browsers in the marketplace and they are forced by their employers to "support it all".

While I congratulate Microsoft for finally doing the right thing and properly supporting standards, and I continue to respect them for their past innovation towards what we now refer to as Web 2.0, I demand that they clean up the mess they left, pirated or not, and update older browsers to version 9 on all machines possible. This old mess is their sole responsibility.

Wednesday, March 18, 2009

Same Origin Policy Needs to Evolve : X-CROSS-DOMAIN

Same Origin Policy (SOP) is an important one - it protects us from fraudulent phishing and spoofing attempts where a malicious website or trojan can emulate a website, such as an online banking site, and gain access to usernames, passwords, and other sensitive data. This type of attack has been referred to as Cross-Site Request Forgery.

But there are some instances where a web service may want its data to be shared - we need a way to indicate (to browsers) that a given web page (or web service) may cross domain boundaries with permission from the source. The problem is that SOP (needlessly?) hampers our ability to create mashups using the XmlHttpRequest object. Instead, we need a new solution - a new policy.

Crockford has been pushing for JSONRequest, a new object that can already be seen in some browsers and, according to Crockford, doesn't require Same Origin Policy enforcement because JSON data is benign.

JSON, while wildly popular, is merely a subset of JavaScript's object literal notation - minus the ability to pass functions. Crockford omitted functions from his vision of literal notation (JSON) in order to render it powerless to execute; once parsed and checked as valid data by the JSONRequest object, it thus becomes an exception to the Same Origin Policy because it is supposedly "benign". It is tantamount to stealing functionality originally given to us by the designers of JavaScript, and I am not in favor of it.

Instead, imagine a world where web-service operators could mark their web service with the header X-CROSS-DOMAIN to allow mashups to use it freely. To cover the cost of transferring vast amounts of data, some web-service operators might wish to include advertising along with their "free" data. It is not unreasonable to suggest that a web-service operator, who must incur the data transfer cost of delivering data to various mashups, might desire to be recompensed with some advertising. Passing a self-executing function within the object literal notation (as envisioned by the creators of JavaScript) might give them the ability to display a splash advertisement on the receiving web page.
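
To make that concrete, here is a rough sketch (the names and data are invented) of a response written as a full object literal rather than function-free JSON - the consuming page gets the data, and the provider gets a small routine it can run to show its sponsor:

// sketch only: object literal notation may carry function values, which JSON forbids
var response = {
data: [
{ city: "Mancora", temp: 28 },
{ city: "Lima", temp: 19 }
],
showSponsor: function () {
var banner = document.createElement('div');
banner.innerHTML = 'Data kindly provided by Example Sponsor';
document.body.appendChild(banner);
}
};

// the mashup uses the data and, as part of the bargain, runs the sponsor splash
response.showSponsor();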

I realize that no one likes advertising, but it is a model that we have accepted in so many other mediums for free content. Nothing in this world is free - and if we can encourage web services with valuable content to be shared en masse, albeit perhaps with advertising, we improve the web.

So, we don't really need another data-fetching object in JavaScript - XmlHttpRequest is robust and fully supported in most browsers. We don't really need JSON - object literal notation already exists. We also do not need a new policy; Same Origin Policy makes sense to protect us from websites that might spoof other websites.

What we need are Same Origin Policy exceptions. The evolved policy must allow web servers to supply an exception header that effectively says: "This document may cross domains", so that "REST"ful web services can be accessed using the methods intended instead of relying only on GET. (Why should we be reduced to using GET when we want to POST or DELETE or PUT? Let's realize Tim Berners-Lee's original vision of the web and not get side-tracked into all sorts of strange meanderings - remember SOAP?!)

Browsers could start by implementing this exception when the expected header is found (using a HEAD request), thus allowing the content. Then application developers can make use of the header in most server handler environments. Later, server software like IIS, Apache, Tomcat, etc., can allow server administrators to explicitly specify which files and folders are cross-domain permittable (overriding any handlers written by developers).

To illustrate it clearly, I would first like to propose the use of the non-standard header: X-CROSS-DOMAIN

The data associated with the header may be a URL for which the cross-domain exception is made, or simply an asterisk ('*') to indicate the document may cross to all domains. Further on, there might be an allowance for multiple domain listings, separated by commas. Furthermore, we might block or blacklist URLs using a minus sign in front of the offending URL. Examples might appear like this:

X-CROSS-DOMAIN: *
- requests from any domain may be permitted

X-CROSS-DOMAIN: http://reinpetersen.com, http://perukitemancora.com
- requests from reinpetersen.com or perukitemancora.com may be permitted

X-CROSS-DOMAIN: http://blog.reinpetersen.com/2008/10/interface-in-javascript.html
- only requests from the specific URL http://blog.reinpetersen.com/2008/10/interface-in-javascript.html are permitted

X-CROSS-DOMAIN: *, -http://hackthissite.org
- all requests may be permitted except hackthissite.org

For this to work, browser support for this header must exist first. The support would be built into the XmlHttpRequest object. Once the support exists in browsers, web developers will be able to customize their (JSON or XML) web services to include the X-CROSS-DOMAIN header in the response in order to allow their data to be used (effectively) in mashups around the web. Mashups can continue to use the existing and robust XMLHttpRequest to GET, PUT, POST, DELETE, HEAD, and TRACE data located on various servers around the web.
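
To make the opt-in concrete, here is a minimal sketch of a web service emitting the proposed header. I'm using Node.js purely for brevity - any server environment that lets a handler set response headers would do - and remember that X-CROSS-DOMAIN is a proposal in this article, not an implemented standard:

// sketch only: a tiny JSON web service that declares itself freely mashable,
// while blacklisting one abusive origin, using the header proposed above
var http = require('http');

http.createServer(function (req, res) {
res.writeHead(200, {
'Content-Type': 'application/json',
'X-CROSS-DOMAIN': '*, -http://hackthissite.org'
});
res.end(JSON.stringify({ message: 'mash me up' }));
}).listen(8080);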

As mentioned, web server software can eventually include server administrators in the process, allowing them to assign X-CROSS-DOMAIN policy exceptions through standard security interfaces instead of programmatically through code.

A similar measure (in intent), albeit unnecessarily verbose and potentially a massive security hole, was adopted after I wrote this article but before I published it. This article is an evolution of one I wrote many years ago [ http://slashdot.org/journal/137429/A-solution-for-cross-site-scripting ] that is no longer hosted - it responds to some alarming trends that I feel should be "nipped in the bud".

In version 3.5 (and greater) of Firefox, a new request header (not a response header such as I am proposing here) called 'Origin' permits the XMLHttpRequest object to obtain cross-site data, leaving the onus on the web server to enforce it (should there be a need to blacklist a given website). You can read more about it here: https://wiki.mozilla.org/Security/Origin, the original draft at: http://tools.ietf.org/html/draft-abarth-origin-05, and the Cross-Origin Resource Sharing working draft it was based on at: http://www.w3.org/TR/2009/WD-cors-20090317/

Because Same Origin Policy is enforced in the browser, and not at the server, we cannot remove the enforcement at the browser unless the server tells us this is alright. Neither can we assume that web servers will adopt this fully, if at all. From my experience having written small HTTP servers and handlers, there are many (thousands, perhaps tens of thousands of) web services that will have to be rewritten to support the Origin request header and deny cross-origin requests - protection that was already supposed to be guaranteed by the Same Origin Policy enforced in all browsers.

In version 3.5 of Firefox, a hacker who has installed a trojan on your computer, or has somehow lured you to his site with a crafty URL (perhaps in an email), can spoof any website that hasn't already retooled its web server to handle the Origin specification (in draft status and yet to be accepted) - all because a web-service operator wanted to share his web service with the world but didn't have the wherewithal to deny requests based on Origin from unwanted abusers of the service in a timely manner.

Granted, it seems Mozilla has supported the white-listing server response header Access-Control-Allow-Origin, but this is still a problem as it is only a whitelist - there is no way to blacklist specific offenders. Thus, if you want to share your resource with many but you have one bugger doing dastardly deeds with your data, you'll have to white-list everyone else sharing your data - potentially a big list.

Furthermore, pushing the responsibility of deciding what can be served based on the Origin request header data onto the server is extraneous. The client (browser) is fully capable of determining what exceptions are possible based on the server's list of exceptions (or non-exceptions) to Same Origin Policy. Remember, we are trying to push processing out to the leaf nodes in this age of increasing computing power in the hands of users and increasing demand on servers.

While attempting to solve the same problem, Mozilla has followed a route (CORS) that is rather backwards in regard to sharing resources across domains. The onus remains on the browser to maintain Same Origin Policy, and the only exceptions should be those dictated by servers, ideally through a header as I have demonstrated in this article.

What the 'Origin' header really does is force web servers to process the header and decide whether they can share the resource or not. Rather, the solution should be an 'opt-in', as I have demonstrated with X-CROSS-DOMAIN, where a web server can declare that a given resource may be freely mashed (or specifically denied) into other websites (perhaps with advertising intact), and the browser can loosen the Same Origin Policy only when it has this permission within the header of a resource.

Thursday, December 4, 2008

A work-around for a common Inheritance problem in Javascript

Inheritance in JavaScript is a well-documented concept on the internet. There are quite a few ways to do it, but most solutions generally involve prototyping. There are caveats, though, and to understand them I will demonstrate how inheritance is typically achieved (through JavaScript prototyping).

JavaScript, arguably, supports a form that mimics class inheritance through the use of the prototype property accessible to all objects. Remember that in JavaScript everything is an object, even a function. If you choose to define your "class" using a function, it is actually an object. JavaScript does not support classes, but we are still able to model our scripts in an object-oriented manner that is good for code reuse, fast execution, and legibility. So first, let us define our "class" (emulation) that will extend the GMap2 "class" found in the Google Maps API:

function GMap2Extension(){}
GMap2Extension.prototype = GMap2;

That is all there is to defining and deriving one "class" from another in JavaScript. It is actually a function, which in JavaScript is also an object. But it behaves like a class in most respects in that it has a constructor that can accept arguments and may define public, privileged, and private properties and methods. [Class emulation in JavaScript is beyond the scope of this article but you can find out more about it in another article I will be posting soon or elsewhere on the net.]

That said, there are a few caveats to watch out for. In this article I will demonstrate a particular problem that relates to extending (sub-classing or inheriting from) JavaScript "classes". The first problem is that the base "class" (function) ((object)) expects arguments to be passed into its constructor and may fail (error) without them. So, we need to supply the arguments (parameters) to the base "class" constructor by rewriting our GMap2 extension "class" (function) ((object)):

function GMap2Extension() { GMap2.apply(this, arguments); }

In the above example we have used the 'apply' function of GMap2 to do this for us. The 'arguments' object is available inside every function; it is an array-like object containing all the arguments passed to the GMap2Extension "class" (function) ((object)). The 'this' parameter is a special JavaScript keyword that refers to the object in scope; in this case it will be a GMap2Extension instance (when the constructor is called with the 'new' keyword).

You'll notice I did not supply named parameters in the definition of the GMap2Extension "class" (function) ((object)). In JavaScript, a loosely-typed language, it is unnecessary but here it is with the parameters defined:

function GMap2Extension(container, opt_opts) { GMap2.apply(this,arguments); }

The parameters 'container' and 'opt_opts' simply mirror those found in the GMap2 function definition. If you wanted to overload the constructor to allow you to pass extra arguments for processing in your own constructor, you might compose the function as follows:

function GMap2Extension(message,container,opt_opts) {
this.message = message;
GMap2.call(this, container, opt_opts); // call (or apply with an array) expects positional arguments, not an object
}

Next, we'll examine more closely the use of the prototype property, available to every object in JavaScript, to extend the "class". The prototype property accepts an object and, since a function is an object and we use functions to emulate classes, we can assign a function to it. Any property not found on a GMap2Extension instance will then be looked up through the prototype:

GMap2Extension.prototype = GMap2;

Ok, so it seems our GMap2Extension "class" is ready to go - we'll just instantiate it using the 'new' keyword and assign it to a variable (once all the objects required to be passed into the constructor have been created):

var mapelement = document.getElementById("mymap");
var mymap = new GMap2Extension("welcome to my map",mapelement); //opt_opts is optional

Everything was looking good until we execute. It errors. It seems the properties and methods within the GMap2 class cannot be found, and there is a good reason for this. Google has wisely placed private and privileged methods and properties within the constructor of their important objects to ensure that there is no accidental (or malicious) use of these special members.

The problem is that, with the subclass's prototype set to GMap2, the object stored in the prototype no longer has access to the private and privileged members created inside the GMap2 constructor - the constructor is out of scope - so it cannot complete its work; it only has access to the public properties.

Another option is to supply the prototype with a new instance of the GMap2 "class". While that may work in other cases, for GMap2 it is constraining to have to know in advance which element will be passed and to lose the freedom to change it. This option is not our solution if we need the freedom to change the map element.

An alternative, while continuing to use prototyping, is to skip assigning the constructor and instead just copy the various properties of the GMap2 prototype:

// copy the members of the base prototype instead of replacing the prototype wholesale
for(var prop in GMap2.prototype) GMap2Extension.prototype[prop] = GMap2.prototype[prop];

This gives us access to all the properties of the (implicit) base class. The shortcoming of this solution is that a GMap2Extension instance will never evaluate as an 'instanceof' GMap2 - regrettable and unfortunate. However, in a language like JavaScript, duck typing ("if it walks like a duck...") is pretty common and can give you an edge in ways that strict typing cannot.

But we're not satisfied with that - we want to know in our code what base a subclass is derived from. Fortunately, there is another way. Using closures in JavaScript (a function within a function) we can keep the private and privileged members found in the constructor of the base class alive long after the original constructor has gone out of scope - as long as we maintain references to the closures. We'll do this with a global function:

function extend(subclass, base)
{
// use an empty intermediate constructor so the base constructor is not invoked here
function Closure(){}
Closure.prototype = base.prototype;
subclass.prototype = new Closure();
subclass.prototype.constructor = subclass;
subclass.base = base; // keep a reference so the subclass can call the base constructor
}

function GMap2Extended(message,element)
{
GMap2Extended.base.call(this, element);
this.enableContinuousZoom(); // the GMap2 methods are available
this.mymessage = message;
}

extend(GMap2Extended, GMap2);
var mymap = new GMap2Extended("My map!",document.getElementById("map"));

alert(mymap instanceof GMap2); // alerts true

This works but I wasn't happy with what seemed to be an unnecessary global function. I felt it should be tucked away in a better place and, since Function is also an object, I decided that was where the extend method should reside:

Function.prototype.Extends = function(base){
function Closure(){}
Closure.prototype = base.prototype;
this.prototype = new Closure();
this.prototype.constructor = this; // 'this' is the subclass function itself
this.base = base;
}

You'll notice that there is a new property added: 'base'. For the time being, and until JavaScript version 1.91 is commonly found on most browsers, we are stuck keeping our own reference from descendant to ascendant. While the prototype chain exists and might be able to provide that information, we have no way of teasing the type out of the object stored in the prototype without at least knowing beforehand what it might be. Adding this property ensures that we know exactly which class another derives from, and we can use it to pass arguments to the constructor of the base class. Now extending a base class seems a little more elegant and we're not stuck with a messy global function:

function GMap2Extended(message,element)
{
GMap2Extended.base.call(this, element);
this.enableContinuousZoom(); // the GMap2 methods are available
this.mymessage = message;
}
GMap2Extended.Extends(GMap2);

var mymap = new GMap2Extended("My map!",document.getElementById("map"));
alert(mymap instanceof GMap2); // alerts true

Tuesday, October 14, 2008

Interface in Javascript

While there are other suggested ways to emulate an Interface in JavaScript, I prefer the method I demonstrate below because of its simplicity. If you really want strict enforcement, you'll need a system that implements decorators (see the Decorator pattern), which may provide strict[er] enforcement at run-time but ends up looking a little less than elegant (note the sarcasm).

For my purposes, I just want to assist the development of complex JavaScript solutions by reducing complexity and avoiding inadvertent mistakes. This is not a solution that ensures strict enforcement - nor are any of the solutions I put forth for writing JavaScript in an OO manner.

In OO terms, an Interface is a contract to which the implementing class (or, in the case of JavaScript, function) adheres. An Interface describes a desired behavior and enforces adherence for any class which implements it. Implementing interfaces, again in OO terms, is like saying OneClass 'IS LIKE A' NutherClass, or more correctly 'PROMISES TO BEHAVE LIKE', in contrast to Inheritance where you might say that a DerivedClass 'IS A' BaseClass.

As JavaScript applications become increasingly complex, with multiple team members participating in the design and construction process, we need a means to enforce the Interface contract beyond just commenting the JavaScript code. The Interface is common in many design patterns and, while we can omit its use, the consequence is that the programmers must remember every place where they intend to implement a grouping of functionality or behavior.

Since the Interface is not available in JavaScript, we are forced to emulate it in the most elegant and functional manner we can (read as 'duck typing'). Here, for your benefit, is my take on a simple way to emulate the Interface in JavaScript:


function Implements(implementer,pseudoInterface)
{
for (var prop in pseudoInterface)
if (typeof pseudoInterface[prop] === "function" )
if (typeof implementer[prop] != "function")
throw new Implements.FunctionNotImplemented(implementer.constructor.name,prop,pseudoInterface.constructor.name);
}

Implements.FunctionNotImplemented = function(implementing,implemented,pseudoInterface){
this.implementing = implementing;
this.implemented = implemented;
this.pseudoInterface = pseudoInterface;
};
Implements.FunctionNotImplemented.prototype.message = function(){
return this.implementing + " does not implement function: " + this.implemented + " as contracted in pseudo-interface: " + this.pseudoInterface;
};

function IUpdateable(){}
IUpdateable.prototype.update = function(){};
IUpdateable.prototype.sendupdate = function(){};

function myClass()
{ Implements(this,new IUpdateable());
}
myClass.prototype.update = function(){ alert("this object had its update method called"); }
// note: sendupdate is not implemented, so instantiating myClass will throw

function main()
{
try { var myobj = new myClass(); }
catch (e) {
if (e instanceof Implements.FunctionNotImplemented) alert(e.message());
else alert("cannot handle exception: " + e.toString()); // log error
}
}

Notice that it is only at instantiation of the object implementing the interface that a check is performed. You may very well be running code and never see your error unless the offending implementer is instantiated and fails to implement the methods defined in the interface. If we attach our logic to the prototype of Function, we can resolve this problem and also do away with messy global declarations. While we're at it, I'm going to add the ability to implement multiple Interfaces:

Function.prototype._Implements = function(pseudoInterface,opt_options){
for (var prop in pseudoInterface.prototype)
if (typeof pseudoInterface.prototype[prop] === "function")
if (typeof this.prototype[prop] != "function")
throw new Function.MethodNotImplemented(this.name,prop,pseudoInterface.name);
};
// reuse the exception shape defined earlier for the Function.prototype version
Function.MethodNotImplemented = Implements.FunctionNotImplemented;

Function.prototype._ImplementsArray = function(interfaces,opt_options){
if (interfaces instanceof Array)
for (var item in interfaces)
if (typeof interfaces[item] === "function")
this._Implements(interfaces[item],opt_options);
else throw "The Array supplied contains an item that is not a Function";
else throw "The parameter supplied is not an Array";
};

Function.prototype.Implements = function(interfaces,opt_options){
try {
if (interfaces instanceof Array) this._ImplementsArray(interfaces,opt_options);
else if (typeof interfaces === "function") this._Implements(interfaces,opt_options);
else throw "The parameter 'interfaces' supplied was not an Array or Function";
}
catch (e)
{ alert(e.toString()); throw e; } // re-throw so callers (like isInstanceOf below) can react
};
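
A quick usage sketch of the version above (IRenderable and Widget are invented here purely for illustration; IUpdateable is the pseudo-interface defined earlier):

// a second pseudo-interface, just for the example
function IRenderable(){}
IRenderable.prototype.render = function(){};

// a "class" that fulfils both contracts on its prototype
function Widget(){}
Widget.prototype.update = function(){};
Widget.prototype.sendupdate = function(){};
Widget.prototype.render = function(){};

// check a single pseudo-interface at definition time...
Widget.Implements(IUpdateable);
// ...or several at once
Widget.Implements([IUpdateable, IRenderable]);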

That is all well and good, but you may have noticed that the implementing object bears no relation to the implemented interface. In other languages, like Java or C#, we can perform a type test asking whether an object IS of a type it has implemented, and it evaluates true; in this way, implementing interfaces behaves like multiple inheritance. Our solution above will not respond the same way to JavaScript's instanceof. Because the prototype chain cannot be branched (it is a one-to-one, child-to-parent relationship), our Interface solution behaves more like a pseudo-Interface.

Of course, we could insert the Interface(s) at the top of the prototype chain, but if our implementing class(es) were derived from classes not implementing the interface, errors would be thrown and brittleness introduced into the solution. This would certainly become more of an issue were one deriving from classes in an external package (such as the Google Maps API) where we have no (classical) means to ensure due diligence with regard to the contract defined by an Interface.

Yes, JavaScript is unique in its flexibility, and we can simply add methods (properties whose value is a function) to the prototype of, for example, the GMap2 function. But that is contrary to legibility where, in the case of the Google Maps API and other external APIs, the package is opaque (through obfuscation, compression, and in some cases encryption) aside from the public documentation.

Again, JavaScript is unique in this ability, but the well-rounded programmer uses several languages and, where possible, composes them in a common or similar OO style.

Yet I'm still unsatisfied with our inability to use instanceof to detect a class that implements a given Interface. The solution is to write our own instanceof and attach it to the prototype of Object:


Object.prototype.isInstanceOf = function(pseudoInterface){
if (this instanceof pseudoInterface) return true;
else
{
// fall back to the duck-typed check; Implements re-throws on failure
try { this.constructor.Implements(pseudoInterface); return true; }
catch (e) { return false; }
}
};


And there you have it - a not-too-brittle means of implementing interfaces in JavaScript.