Thursday, May 8, 2014

Loading JavaScript configuration and i18n for Plone add-ons

collective.jsconfiguration is a Plone package that wants to suggest a possible approach to including JavaScript configuration or i18n data inside add-ons.
Although it's heavily targeted at add-ons that contain JavaScript components, it provides no JavaScript at all: only three possible ways to get configuration from the server.


In recent years I've seen a lot of different approaches to the problem while looking at the source of many different add-ons; every approach has some advantages or side effects. I have only one golden rule: the worst thing you can do is to generate your JavaScript dynamically on the server:
from plone.registry.interfaces import IRegistry
from Products.Five.browser import BrowserView
from zope.component import queryUtility

# IMyPackage is the add-on's own registry settings interface


class JavaScriptView(BrowserView):

    def __call__(self, *args, **kwargs):

        registry = queryUtility(IRegistry)
        settings = registry.forInterface(IMyPackage)

        # ...omissis...

        self.request.response.setHeader('Content-Type', 'text/javascript')
        return """function something() {
    var settings_1 = %(settings_1)s;
    var settings_2 = %(settings_2)s;

    // do something
}
""" % {"settings_1": settings.settings_1,
       "settings_2": settings.settings_2}
This approach "works", but writing code in one programming language through another is bad and ugly:
  • A developer looking for your JavaScript code will be astonished to find it inside a Python source.
  • The more complex the JavaScript code becomes, the more unreadable and unmaintainable it gets.
  • You can't use your IDE's JavaScript features, because it can't understand that you are (partially) writing a JavaScript source.
  • ... (another 10 "because it's bad" reasons can probably be found) ...
So it's clear that the JavaScript must live in a pure .js file.

A (little) step forward is to split the code above into a "configuration part" and keep the real code (which reads the configuration) in a separate .js file.
This is better, but you still have some JavaScript code (even if simpler) in a Python/.zpt file... and you have two JavaScript resources instead of one.
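A minimal sketch of the split (all names and values here are made up): the template renders only a dumb settings object, while the real logic lives in a static .js file.

```javascript
// Rendered by the template (the only dynamically generated part):
var MYADDON_SETTINGS = {
    settings_1: 'some value',
    settings_2: 42
};

// Lives in the static, pure .js file:
function something(settings) {
    // do something with the server side configuration
    return 'Loaded ' + settings.settings_1 + ' / ' + settings.settings_2;
}
```

The generated part is reduced to a plain data structure, so the "language inside a language" ugliness is at least confined.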


Yeah, AJAX is one right answer to the problem: you write pure JavaScript code that asks the server (which normally replies with JSON) for all the configuration needed. You have one .js source file and nothing more.

Well... to be honest you have an additional call to the server to get the configuration, but how can this be a bad thing? It will probably be a very small piece of data, won't it? So very, very fast...
Not exactly.
The Web is full of articles about mobile/front-end development and how to keep performance high. If you want to read something interesting on this subject, take a look at the recent "Is jQuery Too Big For Mobile?".
The real enemy is not how big your data is (hey, I'm not talking about a 10Mb HTML file!) but latency: networks (even mobile ones) are quite fast nowadays, but latency is always pretty high (even more so on mobile). We must reduce the loading of external resources to a minimum, especially resources that can't be loaded asynchronously.

Going back to our dummy example above: it's clear that the configuration is a required resource for running the something JavaScript function. We can't start using it without having the server side configuration (maybe we can do a little better with some advanced approach... I find this way of calling a JavaScript library before it's loaded really interesting).

The same goes for i18n: we can't draw a user interface before the internationalized strings are available.
I have some experience with jarn.jsi18n, and I used it in at least a couple of "pure JavaScript" Plone projects (projects where you don't have a template where you can add data in the other ways I'll introduce later).
In one of them I found that the user interface was loaded in English at first access; after the first attempt, translated messages were used normally.
This happened because strings were loaded through an AJAX call to the server, but if you used a string "too early" (before the AJAX response) it was still in the primary language. Luckily, after the first usage the library caches translations in the browser local storage, a very smart approach indeed.
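The caching idea can be sketched like this (a hand-made sketch, not jarn.jsi18n's real API: the function names, storage key and catalog format are all made up):

```javascript
// Return the translation catalog for a domain, hitting the server only
// the very first time; later calls read the browser local storage.
function getCatalog(domain, fetchFromServer, storage) {
    var key = 'jsi18n-' + domain;
    var cached = storage.getItem(key);
    if (cached !== null) {
        return JSON.parse(cached);          // no AJAX round trip needed
    }
    var catalog = fetchFromServer(domain);  // the (slow) AJAX call
    storage.setItem(key, JSON.stringify(catalog));
    return catalog;
}
```

In the browser, storage would simply be window.localStorage; the "English at first access" problem described above comes from the very first call, when nothing is cached yet and the AJAX response hasn't arrived.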
The library gives no "onload" callback, so I was planning to provide a pull request for this but... do you really like the idea? Delaying the execution because we need the i18n strings?!

So AJAX is bad for i18n/configuration load?

No! If you are developing a pure JavaScript application and you don't know what backend technology will be used (if any), AJAX is probably the only choice.

But I'm focused on Plone add-ons here; we know what kind of backend we have: why not load the configuration directly from the Plone pages where your JavaScript will be executed?

Load configuration and i18n from templates

When you are facing the i18n problem with JavaScript and you have a template available (so you are developing an add-on with a view, or a viewlet), you can put translations in HTML 5 data attributes.
This way you don't need any AJAX call and (even better) you can rely on i18ndude (I tried to use i18ndude with jarn.jsi18n too, but it's not fully compatible: there's no support for the "default" value of a translation msgid) and the zope.i18n machinery. Cool!
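For example, a viewlet template could render the translated strings like this (a sketch: the id and message names are made up, and the strings are already translated server side):

```html
<!-- hypothetical viewlet output -->
<div id="myaddon-ui"
     data-msg-save="Salva"
     data-msg-cancel="Annulla">
</div>
```

The JavaScript can then read them with something like $('#myaddon-ui').data('msgSave'), with no AJAX call involved.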

The same goes for general configuration stuff: you can put your server side data inside templates, again in HTML 5 data attributes. This is not a new technique: Plone 5 added some information inside data attributes (and, from what I saw, also some small encoded JSON data).

But while using templates and data attributes is amazing for translations, I don't like this approach much for other data like server side configuration: you will probably find yourself converting a lot of strings into other data types or, if the configuration is large, flooding the page with HTML 5 data attributes.

Still use JSON

What is the best way to give data to a JavaScript developer? In my opinion the simplest and most direct way is still JavaScript itself, so providing a JavaScript object.
For example: Plone 4 (and also Plone 5) gives JavaScript developers the portal_url variable, which always gives you the URL of the site. Yes, this is ugly because the variable pollutes the global namespace (if another piece of code defines a global portal_url var, one of them will be overwritten, ...). I'm also sure this will change in the future, probably becoming a new data attribute, but it's still the quickest way to read that information from JavaScript.

There is an old and well-known convention for not spawning global vars all around the global namespace: put them in a data structure. Instead of a "portal_url" global, a "plone.portal_url" would be a lot better.
However, many purists don't like this approach either; in fact you are still polluting the global namespace, and the chance that no other JavaScript code defines a "plone" var is near to zero... but not zero (I still like this, and I kept it in collective.jsconfiguration).
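The convention is nothing more than this (the values are made up):

```javascript
// One single (hopefully unique) top level object...
var plone = plone || {};

// ...and every piece of configuration hangs from it
plone.portal_url = 'http://localhost:8080/Plone';
plone.myaddon = {batch_size: 20};
```

Only the plone name touches the global namespace; everything else is safely namespaced below it.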

What's left is still a JSON data source, but we don't want to use AJAX. Uhmm...

The last chance: a lot of super-powered JavaScript frameworks have started to use client side templates.
The technique is really simple: define a new script tag with a type the browser doesn't know (so it won't try to execute it), and put whatever you want inside it.

You can use this script to store demi-HTML code, or other types of data... such as plain JSON.
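For example (the type attribute and the data are made up), the browser will happily ignore this block instead of executing it:

```html
<script type="text/x-my-configuration" id="myaddon-conf">
    {"portal_url": "http://localhost:8080/Plone", "batch_size": 20}
</script>
```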


This is the way collective.jsconfiguration works: it simply registers a new viewlet (in the page head) and waits for you to register additional configuration (from 3rd party products).

Add-ons can register three different types of them:
  • type=text/collective.jsconfiguration.json
    It will store JSON data inside the script tag. The source will be available to be parsed with JSON.parse.
  • text/collective.jsconfiguration.xml
    It will store any kind of data, but it's designed to be used with demi-HTML, like a view/page template output. As the HTML is inside a script tag, you are not really forced to use XHTML or HTML 5 data attributes: you can simply provide XML.
  • text/javascript
    This is exactly like the JSON case, but it's for people who like the idea of providing their data as a plain JavaScript object.
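Reading the JSON flavor back is then trivial; a minimal sketch (the id and the data are made up; in the browser the raw string would come from the script tag instead of being inlined):

```javascript
// In the browser this would be:
// var raw = document.getElementById('myaddon-conf').textContent;
var raw = '{"portal_url": "http://localhost:8080/Plone", "batch_size": 20}';

var conf = JSON.parse(raw);
// conf.portal_url and conf.batch_size are now properly typed values:
// no string-to-number conversion needed, unlike with data attributes
```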
It is especially useful if you are developing an add-on with JavaScript but without any server side rendering element (no template, no view, no viewlet, ...), because the JavaScript will find the configuration and translations you defined in the HTML head of your page: you only need to configure what to put inside.

collective.regjsonify integration

When talking about configuration and user preferences, you will probably store yours inside the Plone registry. Using collective.jsconfiguration and collective.regjsonify you can expose in your JSON (or plain JavaScript object) the same data stored in your add-on registry sheet.

Example application

To better understand how collective.jsconfiguration (and collective.regjsonify) works, you can check the example application.

OT: goodbye Plone (for a while)

This was the very last product in my list of "Plone add-ons that I think could be cool or useful", so I decided to stop spending time on Plone development for a while.
To keep myself busy I will probably start learning some new technology, or I will try to finish my Plone Workflow book...
Who knows?

Saturday, April 12, 2014

"General Type": a new criterion for Plone Collections

A new 1.2 version has been released.
There are some improvements and bugfixes, but I'm particularly interested in one new feature: the customizable parsed query. Why?

Some time ago I started developing a new product to provide some usability enhancements in type categorization using Collections, but it was a dead end: it wouldn't work without patching Plone. But the accepted pull request changed everything, so here it is: collective.typecriterion.

The product aims to replace the Collection's "Types" search term (adding a new search term called "General type").

The scope of the add-on is to fix some usability issues with Collections:
  • Users don't always understand all of the content types installed in the site
  • Users don't always get the difference between one type and another (classic examples: Page and File, or File and Image)
There are also some missing features:
  • There's no way to quickly define a new type alias or exclude types from the list
  • There's no way to group types under a general (but more user friendly) new type
Some of the points above could be achieved by searching types using interfaces (through the object_provides index) instead of portal_type (the attribute that commonly stores the primitive type name of every content item), but:
  • although searching by interface is the suggested way to search by type, it's not used anywhere by the Plone UI
  • using interfaces leads to inheritance behavior (which is great... until you really don't want it)
  • sometimes you don't have the right interface to use. For example, there's an ITextContent interface in ATContentTypes, but it's implemented only by Page and News, not by Event. And generating new interfaces is a developer task
The idea is to keep using portal_type but give administrators a way to group and organize them in a more friendly form.

After installation the new control panel entry "Type criterion settings" will be available.
The target of the configuration panel is simple: it's possible to group a set of types under the cloak of a new descriptive type name. In the example given in the image we take again the definition of a "textual" content (a content that contains rich text data), grouping all the known types.

After the configuration you can start using the new search term.
Usability apart, there's also another advantage of this approach: the integration with 3rd party products.

Let's say you defined a new general type called "Multimedia" and configured it as a set containing Image and Video, where Video comes from the installation of a third party product.
After a while you plan a switch to another product: all you need to do is change the configuration of the general type, not all the collections in the site.

Finally, an interesting note: the code inside collective.typecriterion is really small. All the magic (once again) comes from Plone.

Thursday, February 20, 2014

Plone, HTML 5 Canvas and Face Detection with Webcam

One of my latest articles was about HTML 5 Canvas and Webcam integration, and in the same article I put it all together in a Plone add-on for changing the user portrait with the Webcam.

Recently, following a retweet by one of the cool guys I follow on Twitter, I randomly hit an article about a game that integrates the user's webcam as a controller (unluckily I lost the original link, nor do I remember the programming language used). The article was also a general introduction to face recognition, and this captured my attention.
I asked myself how cool it would be to get a face recognition feature with Python. Is this something possible to do? Probably not so easily...

Introducing OpenCV

...or not? If you look for "face recognition" and "Python" on Google you'll always get references to OpenCV, the Open Source Computer Vision library (which ships with Python bindings).
I merely scratched the surface of this huge piece of technology, as I'm totally a newbie about computer vision. What I learned is that the library can really do a lot of stuff, and it's well documented.

Let me introduce some very-general information.

To use OpenCV for detecting faces in an image you must understand the difference between "face recognition" (finding a known face in an image or video) and "face detection" (finding a face, in general). Obviously the second is the simpler task. Why? Because whatever your task is, OpenCV must be trained to find your target. For a non-simple task like face recognition you can (for example) train the software by providing a set of images where the target face can be found; the result of the training is an XML file. After that you can use this file to implement something like what Picasa, iPhoto or Facebook do when you submit new photos.

With face detection things are simpler because you can find one of those XML files online, already generated for you.

Another important piece of information about the library: recently a deep API change was performed, so a lot of examples you can find online are broken (or must be fixed).

Finally, when you are able to do so, prefer the cv2 module over cv. They are more or less the same library but (from what I understand) cv2 is faster because it is based on numpy, hence C-compiled code.

Going back to what I did: I focused on face detection. Let's see how.

Applying face detection to Plone (yes, I said it)

Meanwhile I was also fixing some minor issues in collective.takeaportrait. A new minor feature is the possibility to move the viewfinder by mouse drag&drop (because being forced to stay in the middle of the screen is not comfortable).
Then came The Idea: what about a viewfinder that automatically centers on the face captured by the webcam?

Here is my wish list:
  • JavaScript checks for server side availability of the face detection feature (just because OpenCV is not a simple library to install... and let me be honest: this is a cool feature, but not really useful)
  • At a not-so-long interval, the whole webcam image taken by the canvas is sent to the server to a face detection view
  • OpenCV on the server performs the face detection
  • If a face is found, a rect is sent back to the JavaScript callback
  • The viewfinder is centered on the face using the feature already implemented for drag&drop
The use of Plone here is a bit unnatural, but it was the simplest environment for my experiment because of the work already done with the webcam in the last article. Apart from Plone itself, I hope you'll get the general idea: how simple browser/webcam/face-detection integration can be, whatever your back-end Python framework is.

As you can suppose, the experiment was a success!

Introducing collective.takeaportrait face detection feature

I don't think I can add more useful details. Let's see the video!

Sunday, November 24, 2013

No more pdb.set_trace() committed: git pre-commit hooks

After it happens 1, 2, ... 5 times, you must find a way to solve the problem.
I'm talking about committing to your git repository a pdb.set_trace() you forgot to remove.

What is really nice about git (something missing in SVN, as far as I know) is the support for two types of hooks: client side and server side.
While server side hooks are complex and triggered when you push to the repository, client side hooks are simpler and under your control.

So, can I use a client side git hook to solve the pdb problem?

Git hooks are executable files (of any kind) inside the .git/hooks directory of your repository. Those files must have a precise name that matches the action they capture (you can find a complete list in the Git Hooks reference).
In our case we need an executable named pre-commit.

As you probably noticed, hooks live inside the repository. But can I have a centralized database of hooks to be replicated into every repository?
Yes: you must use Git templates.
To quickly set up your global templates, see this nice article: "Create a global git commit hook".
For the impatient, in two lines of code:

$ git config --global init.templatedir '~/.git-templates'
$ mkdir -p ~/.git-templates/hooks

Now put the executable file named pre-commit inside this new directory. After that, when you create a new repository, the hooks inside the template directory will be replicated inside the repository.

The pdb commit hook is a well-known problem, already solved. The best reference I found is the article "Tips for using a git pre-commit hook", which uses a simple bash script.

I changed the author's idea a little, because I don't want to block the commit when a commented pdb is found in the code (bugged! See next section!):

# The two patterns are assumptions, adapt them to your project:
# which staged files to check, and which lines to reject
# (the forbidden pattern skips commented-out pdb calls)
FILES_PATTERN='\.py(\..+)?$'
FORBIDDEN_PATTERN='^[^#]*pdb\.set_trace'

git diff --cached --name-only | \
    grep -E $FILES_PATTERN | \
    GREP_COLOR='4;5;37;41' xargs grep --color --with-filename -n \
    -e $FORBIDDEN_PATTERN && echo 'COMMIT REJECTED Found "pdb.set_trace()" references. Please remove them before committing' && exit 1

There is some other info you'd like to know about client side hooks:
  • You can skip the hook for a single commit (git commit --no-verify)
  • Only a single hook of each type can exist in a repository. This is a limit; however, you can find workarounds.

EDIT (21 December)

Normally I don't modify old articles, but I found a bug in the solution above. The problem: the grep command returns a non-zero status code when it finds no match.

Here is another working solution (I'm not a Linux bash ninja... any cleaner suggestion will be accepted!):

git diff --cached --name-only | \
    grep -E $FILES_PATTERN | \
    GREP_COLOR='4;5;37;41' xargs grep --color --with-filename -n \
    -e $FORBIDDEN_PATTERN && echo 'COMMIT REJECTED Found "pdb.set_trace()" references. Please remove them before committing'
RETVAL=$?
# grep exits with 1 when it finds no match: that is the good case
[ $RETVAL -eq 1 ] && exit 0

Saturday, November 16, 2013

Dive into HTML5 Canvas

In the last weeks I read a book about the new canvas element of HTML 5: HTML5 Canvas, by Steve and Jeff Fulton.
I don't want to review the book itself (just two words: it's OK), but reading it led me to think about how Canvas can change the Web in the future.

First of all: I found all the HTML 5 Canvas features interesting, but while reading the book I felt a sort of deja-vu. Every time I ran one of the given examples about drawing and painting, it was like I had already seen it in a Web browser...
No, I'm not talking about Flash (I never had any experience with that) but about Java Applets! Yes, I really said "Java Applets"!
When I started Web development 10 years ago, applets were "cool" because you were able to do a lot of stuff impossible to do with pure HTML + JavaScript. But you already know that applets failed.
Now using canvas you can (again) do a lot of drawing work, but still: how can this be useful to Web users?

HTML 5 Canvas: The Good Part

I was not able to find every answer myself, so I asked Twitter and got back some useful feedback:
  • Videogame development. A lot of this. In my opinion this is great! I don't have time for developing videogames anymore, but it's still a discipline I like to follow. A lot of new JavaScript frameworks for game development are coming out, thanks to canvas.
  • Graphs and histograms. We have a lot of powerful graph generators for server side languages that can create images on the fly, but we can now stop querying a remote server for that (remember that HTML 5 also means "being offline").
  • File handling and preview. As HTML 5 is able to manage files, and canvas can also manage some types of media like images, audio and video (see below), plugins like jQuery File Upload are now possible.
I got some more good examples, but in the meantime I continued reading the book, and I found new super-cool answers myself! So: keep reading.

HTML 5 Canvas: The Super Fancy Cool Part

First of all: video!
I don't want to simply talk about HTML 5 video support or how you can control video with JavaScript, but about what you can do with video and canvas.
This can be summed up in a simple sentence: canvas can handle a video as if it were an image. What does this mean? You can draw the current video frame (taken from a standard HTML video element) inside a canvas; write the current video frame every 20 milliseconds and you are really playing the video inside it.

Why is this cool? I can already see a video element inside my HTML... so?
The great part is that you can draw a video frame inside the canvas and then draw other stuff on top of it! You can add comments and images on the running video! Wow!
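A sketch of the idea (the element ids are made up, and in a real page you would start the loop only once the video is playing):

```javascript
// Paint the current video frame into the canvas context, then draw on top:
// called every ~20ms, this effectively "plays" the video inside the canvas.
function paintFrame(ctx, video, caption) {
    ctx.drawImage(video, 0, 0);       // the video frame is handled like an image
    ctx.fillStyle = 'yellow';
    ctx.fillText(caption, 10, 20);    // our comment over the running video
}

// In the browser:
// var video = document.getElementById('my-video');
// var ctx = document.getElementById('my-canvas').getContext('2d');
// setInterval(function() { paintFrame(ctx, video, 'Hello!'); }, 20);
```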

And now the best part: the HTML Media Capture API.
A lot of browsers already support this new technology that makes it possible for JavaScript to access the native webcam (and microphone) of the user's device. And what better use of this privileged access than putting it into a canvas?
And after that? Can I upload my work to a server? Yes!

Playing with this new toy I spent some time developing a new Plone add-on: collective.takeaportrait. It sums up all the stuff I learned about media capture and video manipulation:
  • If the browser supports the getUserMedia call, a new button appears
  • The button opens an overlay where the webcam output is displayed
  • A viewfinder with the standard Plone ratio for the user's portrait and a countdown are drawn over the streaming video
  • Users can save a photo and send it to the server, replacing the current portrait
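The first point of the list, the feature check, can be sketched like this (in 2013 the call was still vendor-prefixed in most browsers; the function name is made up):

```javascript
// True when the browser exposes getUserMedia under any of its names;
// nav can be injected for testing, otherwise the real navigator is used
function hasGetUserMedia(nav) {
    var n = nav || (typeof navigator !== 'undefined' ? navigator : {});
    return !!(n.getUserMedia || n.webkitGetUserMedia ||
              n.mozGetUserMedia || n.msGetUserMedia);
}
```

When a check like this fails, the add-on simply doesn't show the new button.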

This is something you have already seen somewhere; some social networks (as far as I remember, Facebook for sure) already give users that chance, but it's really amazing to see how few lines of JavaScript code can raise the usability of your site!

Now the bad news (also reported in the Fulton & Fulton book): there's no support for those features on mobile devices right now. Really sad.

Saturday, October 19, 2013

Reusable jQuery plugins with Bower

Inspired by the new article by Maurizio Lupo named "Reusable javascript modules with Bower" and by a recent discussion we had at work about modern front-end development (mainly focused on the Plone world), today I took some minutes to test bower.

To quickly explain bower to a Python developer, I can say that it's "the Distutils of the JavaScript world".
I'm not a front-end developer, because I think that "knowing how to do some JavaScript" doesn't mean being a front-end guy; however, I like the direction JavaScript is going. But note: take the rest of this article for what it is: a "Bower for dummies" note!

In my last article I quickly introduced how the jQuery Plugin site keeps its database updated: a simple JSON file. Bower does the same for populating its component registry site.
So an "official" jQuery plugin can also be a bower component.

Step by step

First of all you need to install the node package manager (npm); for a MacOS user (using MacPorts) it's really simple (same for Linux guys):

    $ sudo port install npm

After that you can install bower.

    $ npm install -g bower

Now we'll go back to our jQuery plugin.
First of all you need the bower.json file, but instead of writing it manually, let's simply type...

    $ bower init

... and answer the questions. After that you can go into the file and add some other missing stuff by looking at the available syntax.

This is a possible result:

    {
      "name": "waria-checkbox",
      "version": "0.2.2",
      "homepage": "",
      "description": "jQuery WAI ARIA Compatible Checkbox Plugin",
      "main": "jquery.waria-checkbox.js",
      "keywords": [],
      "authors": [
        "Luca Fabbri <>"
      ],
      "license": "MIT",
      "ignore": [],
      "dependencies": {
        "jquery": ">=1.7"
      }
    }

The bower.json file is really similar to the file needed by the jQuery plugin site, but it's not the same (a boring task: you must keep both files updated).
As we are focused on jQuery plugins, note that jQuery (which can itself be installed using bower) is defined in the "dependencies" section.

Finally you need to create a new git tag:

    $ git commit -am "Now is possible to install the plugin by using bower"
    $ git tag -a 0.2.2 -m "Tagging version 0.2.2"
    $ git push --all
    $ git push --tags

Last step: register the plugin onto the bower registry:

$ bower register waria-checkbox


    $ bower install waria-checkbox
    bower cached         git://
    bower validate       0.2.2 against git://
    bower cached         git://
    bower validate       2.0.3 against git://>=1.7
    bower install        waria-checkbox#0.2.2
    bower install        jquery#2.0.3
    waria-checkbox#0.2.2 bower_components/waria-checkbox
    └── jquery#2.0.3
    jquery#2.0.3 bower_components/jquery

Now inside the bower_components folder we have both the plugin and jQuery.

Sunday, September 1, 2013

Extending jQuery selectors and facing conflicts with querySelectorAll

Did you know that it's possible to extend the jQuery selector capabilities?
Just type...

$.expr[':'].foo = function(element) {
    // return true when the element matches the custom :foo selector
    return element.className === 'foo';  // illustrative body
};
... and this function will be called when the :foo selector is used.
Nothing new on this side.

Recently I started working on a new jQuery plugin, and to make things simpler I needed a way to override an existing jQuery selector. Again: nothing new; some years ago I found that the method described in the article above can also be used to override, not only to extend.
To make things more testable I decided to move this behavior into another (separate) jQuery plugin, which is the subject of this post.

When I used this method again nowadays I found unexpected results.
I was looking for a way to change how the :checked and :checkbox selectors work, so I defined...

$.expr[':'].checkbox = function(element) {
    // illustrative body: also match elements with an ARIA checkbox role
    return element.type === 'checkbox' ||
           element.getAttribute('role') === 'checkbox';
};
$.expr[':'].checked = function(element) {
    // illustrative body: also look at the aria-checked state
    return element.checked ||
           element.getAttribute('aria-checked') === 'true';
};
This is what I found:
  • :checkbox was working as expected
  • :checked was not working as expected
In fact, the :checked selector was only working when used inside a .filter() call.
I'm not a jQuery core expert, I never looked at its code very much, but this time I needed to investigate my problem. Also: I needed to make this work on jQuery 1.7 and on the more modern 1.10 version, and the two codebases are quite different.

Here is what I found: both jQuery versions contain a method for capturing the :checked selector, but it's only called when you call filter() (so here my override attempt works as expected). For normal selections jQuery now heavily relies on the native querySelectorAll API in every browser that supports it.

This is the core of the problem: the :checkbox selector is a non-standard one (not defined by any CSS specification) while :checked is a known CSS selector. So browsers that support the :checked selector for querySelectorAll are calling this native API.

This is somewhat hilarious! While querySelectorAll is making our browsers (and jQuery usage) faster, it's lowering jQuery's extension capabilities.

I found no smart way to change how querySelectorAll works (and probably there's no way at all; I think we are at C-compiled-code level here).
The trick I used is to disable the native querySelectorAll when the selector contains :checked, and in that case fall back to my jQuery version instead.

    var pattern = /:checked/;
    if (typeof document.querySelectorAll !== 'undefined') {
        // need to partially disable querySelectorAll for :checked
        document.nativeQuerySelectorAll = document.querySelectorAll;
        document.querySelectorAll = function(selector) {
            if (pattern.test(selector)) {
                throw('Native ":checked" selector disabled');
            }
            return this.nativeQuerySelectorAll(selector);
        };
    }

The first step is to disable querySelectorAll (keeping a "backup"), replacing it with a custom function. Then all I need to do is check whether the :checked selector is used somewhere in the query. If not: just call the backed up querySelectorAll; if it is there, I simply raise an exception.

This is the interesting part I found looking at the jQuery source: jQuery core tries to use querySelectorAll every time it's possible, switching to internal JavaScript code only when it's not supported. In this way I'm simulating the fact that my modern browser doesn't support the :checked selector in querySelectorAll.


I found this experiment interesting, but there are some things I don't like:
  • I'm disabling the native querySelectorAll usage even when its basic features would be enough (i.e. when I really need to load only checked checkboxes and no other fancy stuff)
  • I'm wrapping every querySelectorAll call inside the selector check, making all calls to querySelectorAll slower
  • I only want to extend jQuery here, but I'm also disabling all native querySelectorAll calls if the query contains ":checked"
  • I'm changing how JavaScript works. This calls me back to the times when I used prototype.js (which I never liked)
Any suggestion for a better way is welcome!
This is the result of the experiment: jQuery WAI ARIA Compatible Checkbox Plugin.


Apart from the problem described in this article, let me spend some words on the "new" jQuery Plugin Site. The last time I published a jQuery plugin (a long time ago) this site was a total mess: you needed to authenticate, upload a source tarball, write documentation on a wiki-like page, ...

I really like how it works now: if you want to publish a plugin, just put it on GitHub and configure a commit hook! Simple and amazing!