January 25, 2008

Control Windows Media Player from an Adobe Flex application, part 3 (the sources)

As I promised way too long ago, I have now finally published the sources for the demo. You can view the sources here.

January 07, 2008

Sensible RIA engineering: handle events where they should be

In general, separation of concerns in software engineering is a very sensible idea. We have all come to understand this. Loosely coupled and consistently engineered components and applications are better reusable and maintainable. This also goes for Rich Internet Applications (RIAs).

My point
In this post, I just want to briefly touch on sensible event handling in the light of the above. The point I want to make is that events should be handled where it makes sense to handle them. An event is something that happens at the perimeter of an application or a component. It can be a user gesture, like moving the mouse pointer over a part of the surface of a visual component, but it can also be the reception of a response from an asynchronous invocation of a web service.

Handling user gesture events
Let's have a look at events generated by the user first. An RIA is basically a tree of UI components. Its leaves are the "sensors" for user gestures. Components closer to the root should not be bothered with fine-grain events. A component should pre-process local events and generate chunkier events for the branches closer to the root.

A reasonably sophisticated RIA framework like for example Adobe Flex, Google Web Toolkit, or Backbase usually allows for hierarchical composition of the UI components and supports standard UI-concepts such as views, dialogs, containers, layout managers, controls and events.

An RIA framework (and most object-oriented GUI frameworks) generally supports the following two concepts: containers and controls. The first concept comprises all components that - as the name implies - contain other components (children). A container often determines the layout of its children and handles events triggered by these children. Examples of containers are windows, panels and decks.

All other visual components fall into the second category. These are the controls that allow the user to have meaningful interaction with the application. Examples of controls are button, scrollbar, check box, list box, slider, input box, et cetera. A control instance always has a container instance as its single parent.

Because containers inherit from controls (and can thus be used polymorphically wherever a control is expected), all containers are controls too, allowing for recursive containment.
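The container/control relationship can be sketched as follows. This is a minimal illustration, not code from any particular framework; the class and method names are made up for the example.

```typescript
// A minimal sketch of the container/control relationship described above.
class Control {
  parent: Container | null = null;
  constructor(public name: string) {}
}

// A Container *is* a Control (inheritance), which is exactly what makes
// recursive containment possible: containers may hold other containers.
class Container extends Control {
  private children: Control[] = [];

  add(child: Control): void {
    child.parent = this; // a control always has a single parent container
    this.children.push(child);
  }

  // Depth-first walk over the component tree.
  walk(visit: (c: Control, depth: number) => void, depth = 0): void {
    visit(this, depth);
    for (const child of this.children) {
      if (child instanceof Container) child.walk(visit, depth + 1);
      else visit(child, depth + 1);
    }
  }
}

const mainWindow = new Container("window");
const panel = new Container("panel");
const okButton = new Control("okButton");
mainWindow.add(panel);
panel.add(okButton);
```

A window containing a panel containing a button is thus simply a three-level tree, and the leaf (the button) is the "sensor" for user gestures.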

By means of events, controls communicate user gestures (key presses, mouse moves, mouse clicks, screen touches, voice commands, you name it) to other components that are eagerly listening for these events. It is good practice to handle user gesture events within the environment of the originating component. A container should therefore capture and possibly process the events generated by its children, and generate and broadcast higher level (composite) events within its own environment, which in turn should be captured and handled by the container's parent. In short: handle user gesture events close to their source, and generate higher level, composite events for higher level handling.

An example:

Consider an online registration form that contains a number of input controls and an OK and a Cancel button. The input controls generate events informing their parent (the form) that they have received or lost focus or that their contents have changed. The buttons inform the form about mouse clicks on their surfaces. All controls contained in the form handle mouse-overs and mouse-outs by themselves, as they should.

The form deals with the content change events to do validation, and responds to a click on the OK button by collecting all inputs and generating a composite event such as "RegistrationCompleted" that notifies the form's container that the user has correctly filled out and submitted the registration form.
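The registration form example can be sketched in code. The tiny event emitter, the control classes and the validation rules below are all illustrative assumptions, not a real framework API; the point is the shape of the event flow: fine-grain "change" events are handled inside the form, and only one chunky "RegistrationCompleted" event leaves it.

```typescript
// Illustrative sketch: controls emit fine-grain events, the containing
// form consumes them locally and broadcasts one composite event.
type Handler = (payload?: unknown) => void;

class EventSource {
  private handlers = new Map<string, Handler[]>();
  on(event: string, h: Handler): void {
    const list = this.handlers.get(event) ?? [];
    list.push(h);
    this.handlers.set(event, list);
  }
  emit(event: string, payload?: unknown): void {
    for (const h of this.handlers.get(event) ?? []) h(payload);
  }
}

class InputControl extends EventSource {
  value = "";
  setValue(v: string): void {
    this.value = v;
    this.emit("change", v); // fine-grain event, meant for the parent
  }
}

class RegistrationForm extends EventSource {
  readonly nameInput = new InputControl();
  readonly emailInput = new InputControl();
  private valid = false;

  constructor() {
    super();
    // Handle child events close to their source: validate on every change.
    this.nameInput.on("change", () => this.validate());
    this.emailInput.on("change", () => this.validate());
  }

  private validate(): void {
    this.valid =
      this.nameInput.value.length > 0 && this.emailInput.value.includes("@");
  }

  clickOk(): void {
    if (!this.valid) return;
    // Broadcast one chunky, composite event to the form's own parent.
    this.emit("RegistrationCompleted", {
      name: this.nameInput.value,
      email: this.emailInput.value,
    });
  }
}
```

The form's parent never sees individual keystrokes or focus changes; it only subscribes to "RegistrationCompleted".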

Handling web service responses
Web service responses are events too, but they are received at the "other end", the backside of the application. Here's the path they usually travel:

  1. First, the RIA asynchronously invokes a web service. This is done in a separate, new thread that keeps listening for the response until a certain timeout has occurred.
  2. When the response from the web service arrives, the designated response handler in the RIA is invoked.
  3. The response handler changes a part of the RIAs internal state (i.e. it updates a part of the model).
  4. All updated model parts generate a separate state change event that, in the end, will trigger observing views to repaint themselves in order to reflect the state change.
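The four steps above can be sketched as follows. The service call is simulated, and `fetchUserCount`, the model and the observer wiring are hypothetical names invented for the example; a real RIA would invoke an actual web service here, with a timeout on the pending request.

```typescript
// Sketch of the response path: async invocation -> response handler ->
// model update -> state-change event -> observing views repaint.
class Model {
  private observers: Array<(count: number) => void> = [];

  // Views register themselves as observers of the model.
  onChange(fn: (count: number) => void): void {
    this.observers.push(fn);
  }

  setUserCount(count: number): void {
    // Step 4: the updated model part generates a state-change event
    // that triggers observing views to repaint themselves.
    for (const fn of this.observers) fn(count);
  }
}

// Step 1: asynchronous invocation (simulated with a timer; hypothetical
// operation name).
function fetchUserCount(): Promise<number> {
  return new Promise((resolve) => setTimeout(() => resolve(42), 10));
}

async function refresh(model: Model): Promise<void> {
  const count = await fetchUserCount(); // step 2: the response arrives
  model.setUserCount(count);            // step 3: handler updates the model
}
```

Note that `refresh` knows nothing about views: it only touches the model, and the model's change event carries the effect the rest of the way to the front.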
So the effect of a web service response travels through the application from back to front. An RIA should process the responses received from web services close to where they arrive, and generate and broadcast lower level (more specific) events toward the views. In short: handle web service responses close to the backside of the application, generating lower level, finer grain events for visualizing the effect of the response.