Saturday, February 11, 2006

Family Relationships

The family album site really needs generalised deduction. The deductions about dates and ages are hand-coded:

deduce the date of the photo:

photo.deduceddate =
  ( photo.date,
    { some s: subject | s.birthday and photo.s.age | s.birthday + photo.s.age }
  )[1]

deduce the age of the subjects:

all s: subject |
s.deducedage = (photo.s.age, photo.deduceddate - s.birthday)[1]
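In XQuery the hand-coded version comes out roughly as below. This is a sketch only: dates are simplified to plain years, and the element names (photo, subject, person, name, birthday, age) are guesses rather than the real schema:

declare function local:deduced-date($photo as element(photo),
                                    $people as element(person)*) as xs:integer? {
  (
    (: first choice: the date recorded on the photo itself :)
    for $d in $photo/date return xs:integer($d),
    (: otherwise: any subject with both a known birthday and a known age in the photo :)
    for $s in $photo/subject[age]
    for $p in $people[name = $s/name][birthday]
    return xs:integer($p/birthday) + xs:integer($s/age)
  )[1]
};

declare function local:deduced-age($s as element(subject),
                                   $deduceddate as xs:integer?,
                                   $people as element(person)*) as xs:integer? {
  (
    (: first choice: the age recorded against this subject in the photo :)
    for $a in $s/age return xs:integer($a),
    (: otherwise: the deduced photo date minus the subject's birthday :)
    for $b in $people[name = $s/name]/birthday
    return $deduceddate - xs:integer($b)
  )[1]
};

The (first choice, fallback)[1] pattern is just "first defined value wins" written out by hand - exactly the sort of thing a general deduction mechanism should be doing for me.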

One neat trick would be to calculate relative relationships: if the viewer is x and the subject is s, what is s to x (great-great-grand-uncle, or the more verbose form: your father's father's father's brother)?

This calls for a Prolog engine, perhaps - a rough XQuery hack at the idea is sketched below.

Possibilities include:
  • PyLog, a Prolog translator for Python
  • A number of Prolog engines listed here, though the list looks very old
Or RDF and Jena?
[sites which are undated are just terrible - I must remember that on my own site]
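Even without a Prolog engine, a crude start on the relationship question is possible in XQuery. The sketch below only handles the case where s is a direct ancestor of x, walking father/mother links and spelling out the path; the person/name/father/mother markup is an assumption, not the real schema, and the sibling and cousin cases that really call for Prolog-style rules are left out:

declare function local:ancestor-path($x as xs:string, $s as xs:string,
                                     $people as element(person)*) as xs:string? {
  if ($x = $s) then "self"
  else
    (
      for $step in ("father", "mother")
      let $parent := string(($people[name = $x])[1]/*[local-name() = $step])
      where $parent ne ""
      return
        (: recurse up the tree; keep the first path that reaches s :)
        let $path := local:ancestor-path($parent, $s, $people)
        return
          if (exists($path)) then
            concat($step, if ($path = "self") then "" else concat("'s ", $path))
          else ()
    )[1]
};

So local:ancestor-path("fred", "arthur", $people) might come back as "father's father"; the uncle and cousin cases would need sibling links on top of this, which is exactly where Prolog-style rules would start to pay off.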

Sunday, February 05, 2006

eXist on the NDX web server

Paul has generously provided me with space on his server to install eXist in its Jetty configuration. This gives me an external site which I can use with Google Maps. So far I've been putting up the family history site.

Developing on this site is much more immediate than with my typical FTP-enabled site. With the Java client I can work directly on the database, editing code, uploading scripts and photos, and executing test queries. There is also a rather safer web interface. Great for rapid development. I was working on improving the XSLT and looking at the problem of aliases (usually maiden names). I'm so used to the oh-so-useful generalised = in XQuery that I forgot that Xalan is still on XSLT 1.0 with XPath 1.0, whereas XQuery uses XPath 2.0. This generalisation is the main difference to hit me, and it's so useful that it's a pain to have to replace it with the inferior contains(). In fact I'm still really struggling with aliases.
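To pin down what I mean by the generalised =, this is the kind of comparison I lean on in XQuery - the alias markup here is invented just for the example:

let $person :=
  <person>
    <name>Mary Brown</name>
    <alias>Mary Smith</alias>
    <alias>Mary Jones</alias>
  </person>
let $search := "Mary Jones"
return
  (: XPath 2.0 general comparison: true if $search matches the name
     or any one of the aliases :)
  $search = ($person/name, $person/alias)

(To be fair, if the aliases sit in separate elements like this, even XPath 1.0's node-set = does an existential match; it's constructed sequences of strings, or an alias list packed into a single string with no tokenize() available, that push you back to contains().)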

One idea is to develop this family history with my brother Richard and my nephew Reddyn in New Zealand, whom I have sadly neglected. We could all work together in a very Web 2.0 collaboration. I hope they run with the idea.

Tag Clouds

Just been playing with generating tag clouds based on the UCAS keywords I obtained before Christmas. These are now integrated into the view of programmes, but perhaps all such relationships are worth reversing - here, keyword to programme - and the obvious way to do this is with a tag cloud. Ian reminds me of my play with PostScript, generating exam result lists in fonts scaled to the mark itself. The big plus there was that when students were sorted into ascending order of overall average, you could see at a glance how well each separate exam correlated with the overall.

My tag cloud uses XQuery and XSLT. A first pass computes the tags and their counts; the min and max counts, and a scaling factor, are then derived from this computed element - all in XQuery. XSLT then generates the cloud, computing a font size from each count. It's all a bit slow, but it does the job.
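Roughly, the XQuery side might look like this - a sketch only, since the keyword markup, the external $programmes variable and the 10-36pt range are all assumed, and the scaling and markup generation are folded into one query here rather than handed off to XSLT:

declare variable $programmes external;

let $tags :=
  for $kw in distinct-values($programmes//keyword)
  return <tag count="{count($programmes//keyword[. = $kw])}">{$kw}</tag>
let $min := min($tags/@count)
let $max := max($tags/@count)
(: scale counts into a 10-36pt font-size range :)
let $scale := 26 div (if ($max = $min) then 1 else $max - $min)
return
  <div class="cloud">{
    for $t in $tags
    order by string($t)
    return
      <span style="font-size:{round(10 + ($t/@count - $min) * $scale)}pt">{string($t)} </span>
  }</div>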

I've also just realised a feature of most clouds I've seen - double description [Bateson]. That is, tags are not only scaled in size but also increased in intensity. Newzingo uses a 7-point scale, not the continuous variation I'd assumed from my PostScript days, where both font size and red intensity increased together. Clouds differ in their link styles - the default isn't appropriate, since it leaves you with just a page of links, but just what style is best? I've set the title attribute on my tags to the course codes themselves as a quick check, but this seems to conflict with hover styling.
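Something like the following function would give the double description - font size and red intensity both driven by the same normalised count - and carry the codes in a title attribute. The 10-36pt and 0-255 ranges, and the function itself, are purely illustrative:

declare function local:cloud-item($word as xs:string, $norm as xs:double,
                                  $codes as xs:string) as element(span) {
  (: $norm is the tag's count scaled into 0..1 :)
  <span style="font-size:{round(10 + $norm * 26)}pt; color:rgb({round($norm * 255)},0,0)"
        title="{$codes}">{$word}</span>
};

Called as local:cloud-item("chemistry", 0.8, "F100 F101"), say, it would give a large and fairly intense red term; reusing the same value for both channels is the double description.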

But I wonder how to use the comparative idea here, so that keywords in one faculty can be contrasted with those in another to visualise the difference between the two faculties. Perhaps the trick would be a single map which can be dynamically switched between the two - perhaps even programmed to do so, like a flicker star-map comparator. I would need to get the two maps to register, so that there is a single merged map in which words can be turned on or off - perhaps I'd have to drop the size scaling, since that alters the position of the text. Just colouring the text, coding colour by dataset and varying intensity in proportion to importance, might do.

Since our new VC seems likely to be merging faculties, this might be a useful tool for him to see which ones to merge - quelle horreur - but it's just the way this visualisation might be used!