Thursday, November 01, 2012

interactive timelines

If you have to create, view and provide interactive timelines for your articles, you might try out http://whenintime.com/.

It is easy to use and customize, with only a few bugs, all of which can easily be worked around.
And last but not least, the service is free of charge right now... but you should not upload confidential content.

Legacy leads to complexity? Windows 8!

If you ever tried to understand how the success of the past introduces complexity for the future, you should read "Turning to the past to power Windows' future: An in-depth look at WinRT".

It is a massive summary of what WinRT (Windows 8) is and how the bunch of existing technologies fits together to create their next big thing. Reading the complete article with a bit of IT background gives you a very good overview of what they built.

Do you really think that such a complex beast will lead to a more consistent and improved user experience "on any device"?

WebPlatform.org

The site "WebPlatform.org" was introduced to collect best practices for new web technologies like HTML5. Many major vendors are contribute to the platform, e.g. W3C, Google, Microsoft etc.

Worth taking a look at.

To understand the state of the technologies so far: the site itself is built with HTML5 - but try to play the intro on the homepage... still Flash on FF16 (yes, I know, it is simply a YouTube-hosted video).

Time will change...

Decision oriented user assistance

If you think about business applications (e.g. ERP, PLM, CMS), you become aware that the number of functions in all kinds of applications increases with each new version, and most of those applications are feature complete in terms of the critical functions which are relevant for daily operations.

Does this mean that the users are happy with the applications they have to work with? What is the problem with most applications available today? Complexity is the major pain of today's software applications.

All major IT trends during the last couple of years lead to additional functions, additional user interfaces and, in the end, to complex applications.

Examples

  1. IT trend "mobile devices"
    The trend to access services using mobile devices leads to additional UIs which make an application accessible from a mobile device. Vendors want to make their applications mobile-ready and therefore provide the existing functions, or a selected subset of them, on mobile devices. In the best case they optimize the application behavior to the look and feel of the specific mobile device.
  2. IT trend "social media"
    The trend to socialize daily operations leads to new functions as well. You can now comment on the work of your colleagues right from within your application - great. In the best case you can collaborate on the same piece of content within your team.

Paradigm "Automation"

But all kinds of extensions are focused around one single paradigm: "automation" of tasks which substitute manual operations. This approach has been the major topic for business oriented applications for at least the last 20 years. You can calculate the return on investment based on this approach without too much effort and thinking.
On the other hand, most applications with adequate market penetration have already implemented the tasks with measurable value - otherwise they wouldn't have won any software selection process.

So far so good. But what should happen next? Do the next two functions really provide enough return on investment to justify the effort of upgrading the application? What might be a structural improvement and unique selling point for a business application in the future?

Paradigm "Decision"

Automation is all about efficiency. But at the end of the day, each business process or operational step contains at least one valuable decision which drives the success of the process output and, in many cases, the value of the result now and in the future.

Making the right decision is more about ensuring the effectiveness of the process output without losing efficiency.

How-To?

To set up a decision oriented application you need two basic principles:
  1. The application must know the business context of the user who operates within the application.
  2. The application must reuse the knowledge of already completed processes.

Knowing the business context

This means the application is driven by the business process (the relevant subset which is in the scope of the application). In most of today's business applications the process simply fulfills the job of automating certain tasks and notifying different users on certain events (based on the states of resources and state transitions). But the real process is not in the scope of the application.

The IT trend BPM has found its way into many products and projects by now. Some business applications even use a BPM approach and infrastructure to implement workflows.

But there is no application out there whose core is based on BPM. That would mean that each operation takes place as part of an underlying business process, each function is more or less just a decision of the user on how to proceed in the process, and each automation is just a replacement of a human task.

Creating such an application means you have to provide:
  • A collection of automated tasks
  • A collection of human tasks to request human input and choices
  • Triggers for a user or software to make a decision for the next step (e.g. select "edit content", "send to review", etc.)
  • A backend that lets you model the process using the above building blocks, and that lets you create, run and complete those processes
  • A UI that makes all of the items above visible to the user
Now all functions of the application are invoked in a well-known context. So far so good. Sounds like a traditional BPM project, right? And yes, there are already LOB projects out there following this approach.
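
A minimal sketch of how those building blocks could look in code (plain TypeScript; all type and property names are my own illustration, not the API of a particular BPM engine):

// Building blocks of a process-driven application (illustrative only).
type AutomatedTask = {
  kind: "automated";
  name: string;
  run: (context: ProcessContext) => Promise<void>; // replaces a human task
};

type HumanTask = {
  kind: "human";
  name: string;
  choices: string[]; // e.g. "edit content", "send to review"
};

type Task = AutomatedTask | HumanTask;

interface ProcessContext {
  processId: string;
  data: Record<string, unknown>; // the business objects in scope
}

interface ProcessDefinition {
  name: string;
  tasks: Task[];
  // Decides the next task based on the current context and the user's decision.
  next: (context: ProcessContext, decision?: string) => Task | "done";
}

The concrete types don't matter - the point is that every function call is anchored in a process and its context.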

Reusing the business context

What is the next big thing? You are able to store all decisions of your users in the context of your persisted processes and use the results to improve the decisions of other users.

In each process, different users can learn from the experience of others, and individual users can be guided to avoid making a wrong decision again - or to reuse best practices from the past. To achieve this goal you have to take the information collected from previous processes and transform it into valuable guidelines for your users:
  • show them what other users did when they were in the same context as the current user
  • prevent them from doing an operation which, based on the experience of previous processes, leads to errors in later steps of the process
  • show them additional information other users searched for in the same situation
  • let them add guidelines for later steps in the same process
  • let them attach additional information they will always need when they are in the same context again (e.g. reference material for an operation, etc.)
  • etc.
Some very obvious and easy things can also be done (a small sketch follows after this list):
  • Only provide functions which make sense in the context of the process (real context-aware functions) to reduce the number of choices an individual user has to choose from
  • Identify functions no one uses in a particular context in order to simply remove them, or add hints / best practices which make these functions usable
  • Identify best and bad practices from what the users did and improve the process (the application) based on real-world usage.
  • etc.
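
A small illustrative sketch of the "show them what other users did" idea: rank the choices recorded in the same process step and context, hiding choices that caused problems later on (the data model and function names are assumptions, not a product API):

// Suggest the choices other users made most often in the same situation.
interface RecordedDecision {
  processName: string;
  step: string;
  contextKey: string; // e.g. a hash of the relevant business context
  choice: string;     // e.g. "send to review"
  ledToError: boolean;
}

function suggestChoices(
  history: RecordedDecision[],
  processName: string,
  step: string,
  contextKey: string
): string[] {
  const counts = new Map<string, number>();
  for (const d of history) {
    if (
      d.processName === processName &&
      d.step === step &&
      d.contextKey === contextKey &&
      !d.ledToError // hide choices that led to errors in later steps
    ) {
      counts.set(d.choice, (counts.get(d.choice) ?? 0) + 1);
    }
  }
  // Most frequently used choices first.
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([choice]) => choice);
}
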
With the availability of tools in the area of "Big Data" you might think of enhanced KPIs, like:
  • identify patterns in the process that lead to results which are not valuable to your business, by querying the process, the corresponding data and decisions, and the results, according to the question you have to answer
  • identify related choices in the process and the corresponding information as a baseline for process improvement
  • etc.

Results

This kind of approach leads to:
  • structural usability
    The application is able to guide the users as much as possible and to provide as much information as possible for their next relevant decision (operation).
  • reduced complexity
    Only relevant information, functions and user guidance are provided, which reduces the complexity for the user of the application.
  • social experience (common improvement)
    The application can share information, discussions and best practices in exactly the context this information is relevant for.
  • improved effectiveness
    Best practices can be established for all users based on real-world experience - not only on theoretical considerations.
  • improved efficiency
    Critical operations can be identified and additional automation can be added based on real-world business value.

Too complicated and complex?


No, that is not the case. Today's IT tools available on the market make this kind of application easy to implement (even just the core subset of the mentioned approach as a baseline for future extensions).
BUT it is not possible to simply extend existing products with this kind of approach without re-implementing the core part of the application from scratch.

This means existing and strong vendors might struggle to do this - but in case you are thinking about creating a new line-of-business application, think about using a different approach than your competitors...




Monday, October 01, 2012

Stumbled upon: Write the Freaking Manual

I stumbled upon the thread "WTFM: Write the Freaking Manual" triggered by the following blog post http://www.floopsy.com/post/32453280184/w-t-f-m-write-the-freaking-manual.

I would recommend following the thread (which already contains more than 200 thoughts) in case you want to understand:
  • the different views of the developers
  • the different views of users of a certain software
  • the different views of tech writers
I have also had discussions with companies creating software products about what needs to be documented, whether it is possible to create a software product which doesn't require additional documentation because of "intuitive usability", etc.

The answer is easy and difficult at the same time:

You have to deliver relevant information for your audience.

This means you have to understand:
  • Who is your audience?
    and
  • What is relevant?
And always keep in mind: "Your user does not have the same context as you".

Example

If you develop a software infrastructure that should support other developers in doing their job faster, you should deliver:
  • orientation for your users (which tasks does the library support?)

    the concepts of all major parts of your framework, from top to bottom
    => a basic overview of all implemented concepts, followed by a description of each concept

    A good example is provided by IBM for their ICU library (http://userguide.icu-project.org/).
    This library isn't exactly trivial, but you get well-described concepts for all components of the library.
  • how-to set up
    provide a how-to for setting up the software for initial use
  • how-to use
    provide as many code samples / demos / pieces of real working code as possible for the operations of your users,
    e.g. by providing your well-documented unit test library (a small sketch follows after this list).
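
For illustration, a sketch of a unit test that doubles as documentation (TypeScript; formatDate is a hypothetical library function, and the assertions use Node's built-in assert module):

import { strict as assert } from "assert";
import { formatDate } from "./formatDate"; // hypothetical library function

// A unit test as documentation: it names the supported use-case and shows
// real, working calling code for it.
export function testFormatsIsoDateWithDayMonthYearPattern(): void {
  assert.equal(formatDate(new Date("2012-10-01"), "DD.MM.YYYY"), "01.10.2012");
}

export function testRejectsUnknownPatternPlaceholders(): void {
  assert.throws(() => formatDate(new Date(), "QQ"), /unsupported placeholder/);
}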

How can you identify which information your audience needs? 

You have to understand their daily work with your software and all the questions which cannot be answered in a short amount of time by the software itself without additional information.

If you identify those areas well, the resulting documentation will add value to the software and will increase the audience using your software.

Friday, September 14, 2012

HTML5 for any device? yet?

Mark Zuckerberg made a widely recognized statement on the usage of HTML5 for mobile devices:

When I’m introspective about the last few years I think the biggest mistake that we made, as a company, is betting too much on HTML5 as opposed to native… because it just wasn’t there. And it’s not that HTML5 is bad. I’m actually, on long-term, really excited about it. One of the things that’s interesting is we actually have more people on a daily basis using mobile Web Facebook than we have using our iOS or Android apps combined. So mobile Web is a big thing for us.
source: http://blog.tobie.me/post/31366970040/when-im-introspective-about-the-last-few-years-i
A more technical detailed feedback is provided here: http://lists.w3.org/Archives/Public/public-coremob/2012Sep/0021.html

This means two major things:
  • HTML5 is not yet ready (that is no real news) to be a simple replacement for native apps
  • HTML5 is the major enabling technology for deploying feature-rich content to the mobile web.
If you have ever tried to create a production web application using the HTML5 stack which should run on "all" common mobile devices, you are aware that this is a pretty tough job and still requires limiting the functions to a small subset of functionality and, as a result, of UI experience. If you have to provide a feature-rich application like Facebook, you obviously have to work around hundreds of issues and the result is still not sufficient for an individual user on a single device.

For a very helpful overview of the state of the different mobile browsers, Facebook introduced ringmark, a test suite (including results for the most common mobile browsers) which shows which relevant API functions are implemented in a particular mobile browser, prioritized by different levels of importance.

The current state of the standards is published by the W3C on a regular basis; latest release: http://www.w3.org/Mobile/mobile-web-app-state/

What you see in the test results is that HTML5 can be used in case you want to deploy content-driven applications focused on online access and integration.

In any case you just have to start small, then test and verify the behavior for your defined target audience. The HTML5 path is definitely the right path to follow, but it still requires a lot of work from both the vendors and the standardization groups.
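
One simple way to start small is to feature-detect an API before relying on it. A minimal sketch (plain TypeScript against browser globals; the APIs checked are just common HTML5 examples, not a specific ringmark level):

// Use an HTML5 API only where it is actually available.
function storeDraft(key: string, value: string): void {
  if (typeof window !== "undefined" && "localStorage" in window) {
    window.localStorage.setItem(key, value); // Web Storage is supported
  } else {
    // Fall back, e.g. keep the draft in memory or post it to the server.
    console.warn("localStorage not available, falling back to server save");
  }
}

// The same idea for other APIs:
const hasAppCache = typeof window !== "undefined" && "applicationCache" in window;
const hasGeolocation = typeof navigator !== "undefined" && "geolocation" in navigator;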

Thursday, September 13, 2012

First web page of the INTERNET

I'm not 100% sure, but according to this article the first web page of the INTERNET is still available, hosted by the W3C, here: http://www.w3.org/History/19921103-hypertext/hypertext/WWW/TheProject.html.

All links still OK.

If you create a web page today with this number of links and come back in 10 years - how many links will still point to the correct content? Even if you only have links to content you own yourself: do you think you will be able to reproduce your targets in ten years?


Compare PDF in automated test scenarios

Did you ever have to test a process which creates a PDF based on well-defined test data? You want to ensure that the result is equal to, or conforms to, given acceptance criteria which you can describe by an existing PDF? You want to automate this operation?

In this case you are looking for a tool which compares two PDF files and at least provides you with the answer to the question "are those two files the same?". Based on your use-case this means:
  • the contained text is the same on the same page of the PDF
  • the contained appearance (layout) is the same
In addition you need a command line interface to use the function within an automated test procedure.

As you might know, Adobe Acrobat provides a compare function which is quite sufficient (see http://tv.adobe.com/watch/acrobat-tips-and-tricks/comparing-two-pdf-documents/), but it requires a commercial license, and integrating this function into your automated test environment isn't simple (from a technical and commercial point of view).

Fortunately the tool comparepdf is available as free software. It is very simple to install, integrate and use. It provides different compare modes for the scenarios mentioned above. In addition, a rough overview of the kind of difference is provided and can be integrated into automated test reports.
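
For illustration, a minimal Node/TypeScript wrapper around comparepdf for an automated test could look roughly like this (check "comparepdf --help" for the exact option names of your version - treat this as a sketch):

import { execFileSync } from "child_process";

// Returns true if comparepdf reports no differences (exit code 0).
function pdfsMatch(expected: string, actual: string, mode: "text" | "appearance"): boolean {
  try {
    execFileSync("comparepdf", [`--compare=${mode}`, expected, actual]);
    return true; // exit code 0: no differences found
  } catch {
    return false; // non-zero exit code: the files differ (or could not be compared)
  }
}

// Usage in an automated test:
if (!pdfsMatch("expected-report.pdf", "generated-report.pdf", "text")) {
  throw new Error("Generated PDF does not match the accepted reference PDF");
}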

Once you have identified a difference, you might be interested in what the difference is and where it appears. For this, the GUI-based DiffPDF tool can be used free of charge. It is not as powerful as the Adobe Acrobat compare function, but in many scenarios it helps to see what's going wrong without the need to buy an Adobe Acrobat license.

Sunday, May 06, 2012

Not Open Source but Free CCMS (2)

The already mentioned XML-based CCMS "Calenco XML CMS" is still available (see also my post "Open Source CCMS").

There is one more vendor which lately offers an already existing DITA-based CCMS without any license cost: "SiberSafe DITA CMS". Read the EULA carefully, but in case you need something to play with...

Neither of them is open source. Their goal is not open and shared development. They are simply heading for a lower barrier to customer entry.

What you see is that in both cases the company driving the implementation wants to get in touch with you, and both companies offer additional features with dedicated license costs.

I personally expect more products in this domain to follow the same approach. Why?

The specific domain of "technical documentation" is pretty small, and there are many different, small companies out there providing specific products to support this domain.

Even in huge installations, the number of licenses required to support the users dealing with technical information isn't very large - this means the opportunity to sell a huge number of licenses is limited. In addition, most of the available tools are similar to each other - with individual advantages but no structural differences.
This means this business model does not really scale, and the effort required to sell a license is high.

On the other hand, having a tool does not by itself improve your information process and therefore does not add any business value to your organization. In the best case it supports your process with automated tasks. But first of all you need optimized methods and processes (an information process) before any tool can assist you as well as possible.

This means the future is not to create and develop products that look like today's CCMS systems available on the market. The future is either to create information-process-driven products where technical information is just one use-case, OR to focus on integration services to get the value out of existing information.

What are the limitations of today's CCMS systems? And what will more future-oriented designs look like? More to come in future blog posts...

Search and Replace on multiple files

Search & Replace is a common task in data processing environments. You cannot avoid having to process your data to replace or add a word, a syntax element or even multiple lines of text in several different resources.

If the task can be fully automated - meaning there is a unique algorithm to transform a resource A into A' based on the content of A - then you will look for available methods and tools supporting this kind of operation.

Methods

Regular expressions are a very powerful way not only to express the search for common patterns in text-based resources, but also a good foundation for replacing or extending existing content.
Compared to simple phrase-based patterns, most imaginable rules can be expressed and used as the source for the required transformation.
But regular expressions come with a high cost of complexity. It is very easy to define rules which result in "false positives", i.e. matches that you didn't want to match.
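
A tiny example of such a false positive, and the word-boundary fix, in TypeScript:

// Replacing the term "foo" without word boundaries also hits "football".
const text = "foo is great, but football is a different thing";

const naive = text.replace(/foo/g, "delicious");
// -> "delicious is great, but deliciousball is a different thing" (false positive)

const bounded = text.replace(/\bfoo\b/g, "delicious");
// -> "delicious is great, but football is a different thing"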

Tools

Doing Search & Replace in the file system on multiple resources (files) is easy for IT people using Linux tools like grep, ....
On Windows you can also install those tools and make them a powerful foundation for this kind of operation (see http://gnuwin32.sourceforge.net/packages/grep.htm).

TextCrawler

But not everybody likes to become an IT expert just to replace the term "foo" with "delicious". On Windows you can use TextCrawler for this - one of the best UI-based tools I'm aware of.

It provides:
  • simple phrase-based operations ("replace phrase A with B") on multiple files
  • more complex regular-expression-based operations
  • and, in addition, a fuzzy search for more advanced search operations
It also supports the use of Unicode characters in search and replace and the processing of files encoded in Unicode (UTF-8, UTF-16).

To avoid false positives you can:
  • preview the hits before actually performing the replace operation
  • use a dedicated regular expression tester to see what exactly matches and what will be replaced
Search and Replace on multiple files is something you have to treat with care, but in case you have to do it on a Windows desktop, using this tool is something I can recommend.

Concurrency: low-level design still matters

Today's designs are very focused on the application level, using optimized operations for a given software service.

This means you try to create simple, atomic operations which can be called from your business process. Each service can be distributed and scaled using mainstream deployment patterns.

So far so good. But here is what you might see once you do this: running one thread on a single machine gives you the predictable performance you have to achieve; running 8 concurrent threads, each single operation takes a much longer execution time.

Have you also seen this behavior in one of your applications? Then you are probably facing concurrency issues, and once you have eliminated all application-related issues, you become aware that even today hardware-related optimization is something you have to take care of. Really?

In my daily work I see and know some applications that don't scale very well on a single machine - they are very basic in terms of application-level algorithms, but they use algorithm patterns causing memory contention...

Thus you still have to understand the low-level architecture and the ways to optimize the basic algorithms in your code.

Lock-Free Algorithm

Have a look at "Lock-Free Algorithms" to get a very good overview of how such things still affect the concurrency behavior of your application. You should also read the "Beginner's Guide to Concurrency" by Trisha Gee and Michael Barker.
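
To give a flavor of what "lock-free" means in code, here is a minimal sketch of the compare-and-swap (CAS) retry loop such algorithms are built on (TypeScript with SharedArrayBuffer/Atomics; purely illustrative, not taken from the articles above):

// Lock-free counter increment: retry the CAS until it succeeds instead of taking a lock.
const buffer = new SharedArrayBuffer(4);
const counter = new Int32Array(buffer);

function lockFreeIncrement(view: Int32Array, index: number): number {
  for (;;) {
    const current = Atomics.load(view, index);
    // Only write if nobody changed the value in the meantime.
    if (Atomics.compareExchange(view, index, current, current + 1) === current) {
      return current + 1;
    }
    // Another thread won the race - loop and try again.
  }
}

lockFreeIncrement(counter, 0);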

You also get hints and estimates of how virtualization might affect your performance.

Summary

Choosing the right hardware still matters in operational scenarios where concurrency is used to scale your application AND scalability is a core success factor of the application.

Monday, April 23, 2012

Open Source: data management and transformation library

Stumbled upon the following post: http://flowingdata.com/2012/04/23/miso-an-open-source-toolkit-for-data-visualisation/

Relational data (or data that can be stored in a table or matrix) is "old style" but still a common use-case in today's web applications.

A new JavaScript library called the "Miso Project" has started to implement components that simplify the management and transformation of this kind of data (and will be extended with visualization use-cases). This means you can easily manage relational data on the client side, which can be very handy in certain use-cases. So it's like a client-side database with a corresponding query syntax.

One of the most common patterns we've found while building JavaScript-based interactive content is the need to handle a variety of data sources such as JSON files, CSVs, remote APIs and Google Spreadsheets. Dataset simplifies this part of the process by providing a set of powerful tools to import those sources and work with the data. Once data is in a Dataset, it becomes simple to select, group, and calculate properties of, the data. Additionally, Dataset makes it easy to work with real-time and changing data, which pose one of the more complex challenges to data visualization work.
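
To make that concrete, here is a plain TypeScript sketch of the kind of select/group/calculate work such a dataset component takes off your hands (illustrative only - this is not the Miso Dataset API):

// Client-side "query": filter rows to one year and group revenue by country.
interface Row { country: string; year: number; revenue: number }

const rows: Row[] = [
  { country: "DE", year: 2011, revenue: 120 },
  { country: "DE", year: 2012, revenue: 150 },
  { country: "US", year: 2012, revenue: 200 },
];

const revenue2012 = new Map<string, number>();
for (const row of rows.filter(r => r.year === 2012)) {
  revenue2012.set(row.country, (revenue2012.get(row.country) ?? 0) + row.revenue);
}
// -> Map { "DE" => 150, "US" => 200 }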

In case you have to develop, e.g., a simple, standalone HTML application without a permanent server backend, this library will help you without adding too much complexity to your (implementation) infrastructure.

Sunday, April 08, 2012

QR Codes in your documents

QR codes are one method to ease the information exchange between classical media and mobile devices (common usage: a direct link to the corresponding web page in paper-based catalogs, manuals, ...).

But how to create those QR codes without too much complexity?

The Google Chart API provides:
  • a chart wizard to create QR codes and the corresponding styling
  • an infographics API to create static images based on a posted chart definition (URLs)
Voilà. You now have an easy to use backend for the creation of QR codes using e.g. XSLT.

The following code is taken from "QR Codes in DITA Output", which shows how to create QR codes for the PDF output of the DITA-OT using XSL-FO:

<!-- Insert QR code -->
<xsl:template match="*[contains(@class,' topic/xref ')]
                      [contains(@outputclass, 'qrcode')]">
  <fo:external-graphic>
    <xsl:attribute name="src">
      <xsl:value-of select="concat('https://chart.googleapis.com/chart?cht=qr&amp;chs=100x100&amp;chl=', .)"/>
    </xsl:attribute>
  </fo:external-graphic>
</xsl:template>
 see: http://ditanauts.org/2012/03/14/qr-codes-in-dita-ouput/

Sample (code: <img src="http://chart.apis.google.com/chart?chs=200x100&cht=qr&chl=http%3A%2F%2Ftrent-intovalue.blogspot.de%2F2006%2F05%2Ftrent-definition.html" width="200" height="100" alt="" />):

QR Code Sample


how to preserve the value of big data over time....

Today the buzzword "big data" is getting more and more popular. It is a nice label for a common statement: "the amount of data, and making valuable use of it, are becoming important".

There is a flood of information and products out there which promise to help you store and analyze this data. But one of the major issues with data is not its current usage - it is the maintenance of the information over time.

The "Web of Data" is one common example. It is the biggest data store we currently faced with. Pretty simple to access and analyze. So far so good. But there is one maintenance of this data (required?). Collect 100 links to resources on the web today. than 24 month later try access them...how many of those links still work, and if they work the resulting information still using the same semantic as it was once you build up the link?

The "Web of Data" currently decided not to maintain data just provide them now, enrich them and just replace them with different semantic...The Web Wayback machine (http://archive.org/web/web.php) is an approach to help individual users to keep their individual value of data for some scenarios.

Now think about the corporate information you collect right now. The speed and adaptation rate of this data will increase, and new demands to enrich the data will appear. Did you ever think about how to ensure that all that data can be adapted to new needs? Based on my personal experience, at least 60% of the overall project costs in IT projects dealing with information in a certain domain of the organization are related to data migration. Those costs are related to adapting data to the new tools which maintain the data, converting data between different data models and formats, and ensuring the quality of the data and its usage in existing business processes.

What does this mean for each IT project dealing with data?

  • Initial load is important
    You always have to define how to get the data you need for the initial start (and not only during the regular operation of your business process) and how to verify that this data is valid for your future needs.
  • Expandability of your data might be important
    You can use static data models and tools (e.g. classical relational data models), or more flexible approaches like typed graphs of data, where content using different models can coexist more easily (see the sketch after this list).
  • Adaptability of your IT systems might be important
    What happens to your existing data once the model is extended or changed? Do not only take care of the data itself; also take the relations of the data into account. Today you only access a specific level of your data; a few years later some use-case requires you to access an individual step or to introduce an additional level that does not exist yet.
  • Ensure the maintenance of your data
    Do not "use" any data which does not have any value in your primary business process. The usage of information requires the correctness of the data. Your data will never be correct if the process creating this data does not get any value out of the data itself. This means the data will simply be partially incorrect or incomplete.
It is, and will remain, the most expensive IT task in your organization: "how to preserve the value of big data over time..."