Associate director of undergraduate studies

For the next few months I’ll be occupying the position of associate director of undergraduate studies for the Computer Engineering programme, left vacant by Prof. Ivan Ricarte, who got his full professorship at another academic unit of UNICAMP. Currently the director is Prof. Helio Pedrini of the Institute of Computing. Prof. Akebo Yamakami has kindly accepted to be my “vice-associate”, an informal position that exists because the directorship is shared between two academic units. This is good news, because I’m a rookie when it comes to academic administration, while Prof. Yamakami has been involved in running undergraduate studies since… forever. His experience will be invaluable.

I was appointed by the Electrical and Computer Engineering School steering committee in an indirect election, for a provisional term. Next June, the entire electoral college (faculty, staff and students) will vote for the next director here at FEEC, and for the next associate director at the Institute of Computing, since the positions switch between the two units at the end of each term. (I know, I know, it’s complicated, but you get used to the idiosyncrasies of Brazilian public administration after a while…)

I thank my colleagues on the steering committee for their trust.

Am I forgetting anything?

I have just realized: the most important event in my professional life since my Ph.D. viva-voce defense went unannounced on this blog. I have recently been accepted as a faculty member of the Department of Computer Engineering and Industrial Automation (DCA) of the School of Electrical and Computer Engineering (FEEC) of the State University of Campinas (UNICAMP). I am now officially an absent-minded professor.

Balancing a faculty career, with its research, teaching and administrative obligations, is more challenging than people outside academia usually realize. For the last five years I was focused exclusively on research, so I am rediscovering the thrill of being in a classroom. I am also discovering the painstaking work needed to sustain academic institutions: if their horizontal, democratic nature grants their members many freedoms, it demands in return a great deal of debate, discussion and politics.

Nevertheless, I am loving every minute of my new duties. I know that the passing years take their toll, but for the moment, at least, I am in my element.

Collaborations, developments, projects: taming the complexity of research

In the last few days I’ve been working round the clock: I’ve given my kNN search talk twice, and several call-for-papers deadlines have gone by (or are coming up), which meant I was always in a hurry for one reason or another.

This has been especially stressful because, for some time, I had been getting everything done days ahead of the deadlines. But this month I went back to the classic regime of “crossing my fingers and hitting the submit button at (literally) the last minute”.

As I move towards more complex research involving several labs, many students and ambitious experimental designs, I increasingly feel the need for more formal management tools. But techniques created for business (or even engineering) do not seem to translate well to academic research.

For example, I’ve tried using Gantt charts in Microsoft Project to keep track of complex tasks and their dependencies. But I’ve found that as the work progresses, the list of tasks changes often and significantly, as some research directions turn out to be more fruitful than others. This ends up rendering the initial plan (and any schedule based on it) useless.

Maybe the answer is an adapted “Spiral Model”, where risks are reduced from iteration to iteration, and detailed schedules are drawn up only for the lifetime of a single iteration? Or are there management models specific to research projects? How do the most efficient R&D labs manage their projects? How can their experience be adapted to Brazilian public research?

Unfortunately, so far I seem to have more questions than answers. I’ve bought the interesting book “Managing Science: Management for R&D Laboratories”, which is slanted towards particle-physics labs but has (hopefully) useful concepts for all areas of experimental research, and which should help me get an initial handle on the subject.

The Quest for Scholarly Tech: Ubuntu

Since the last year of my thesis I have been in such a hurry that I haven’t been able to keep up with technology. But now I have a few spare days, and I’m using them to evaluate some new (?) services and software, in my eternal quest for better tools for academic work.

Linux is by no means new software, and I have been using it on and off for more than ten years. During my undergraduate years I had already noticed the cult following of the penguin, and nothing was deemed more uncool than proprietary OSs. But getting anything Linux-related (including the OS itself) meant downloading the sources and compiling them, which was enough to keep me away (I made an exception when I took the Operating Systems course, one of the rare occasions when one is actually expected to compile an OS kernel). Besides, those were the years of religious wars: the mere mention of the words “Windows” and “Linux” in the same room was enough to spark hours of heated debate with a very low signal-to-noise ratio, not the ideal environment for gathering useful information. In those troubled times, I used to ask my friends, provocatively: “what do you think will happen first: Windows getting stable or Linux getting usable?”

Ironically, I think both have (sort of) happened by now. We have been graced with some good Windows releases, like 2000 and XP, which have reached a nice level of stability. And some Linux distributions are almost user-friendly. I don’t think the old mantra of “Linux (or Unix) on the server, Windows (or MacOS) on the desktop” has been completely overthrown, but we are getting there, from both directions.

Specifically, I’ve been evaluating Ubuntu, the famous “Linux for human beings” distribution, which has been around since 2004. I downloaded and installed it last November, and my first impression was extremely positive: installation was a breeze, the interface is pretty, there are lots of GUI-accessible tweaks, and the package management system actually works.

Installation and hardware compatibility have always been the Achilles’ heel of Linux distributions, but they are also among the aspects that have improved the most. Not all equipment guarantees an equally worry-free process, but the list of supported hardware has grown dramatically. In my case, the installation on a three-year-old Dell Latitude D620 laptop was uneventful. My guess is that if a machine is both popular and old enough, people will have its drivers figured out.

The first thing that strikes you about Ubuntu is how cute it is. From the startup and shutdown screens to the configuration applets, everything is designed to shield users from the Unix innards. Ubuntu is the first distro I’ve ever seen with a usable “Start Menu” (instead of the usual everything-but-the-kitchen-sink approach). The system feels easy, inviting exploration and discovery.

Ubuntu's "Start Menu" and quick-launch bar

In the best tradition of Linux systems, (almost) everything is customisable, but, breaking with tradition, the system is (almost) comfortable to use fresh out of the box. That didn’t prevent me from trying on a bit of everything: tweaking themes, font rendering, keyboard layouts, shortcuts, and anything else within reach of a mouse click. At some point I had played so much with the shortcuts that the system became practically unusable. What to do? Well, in Ubuntu (courtesy of GNOME) customisation is done by adding files to the user’s home folder, not by modifying system-wide ones, so deleting those files (located in the ~/.gconf folder) reverts the system to its pristine state.
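
For the record, here is a minimal sketch of that reset, assuming the settings live under ~/.gconf as they did on my install; it moves the folder aside instead of deleting it outright, so the old settings can be restored, and it should be run from a console while logged out of the desktop session so the configuration daemon doesn’t rewrite the files behind your back.

```python
# Minimal sketch: reset GNOME/GConf settings by moving ~/.gconf aside.
# Assumes the settings live in ~/.gconf (true for GNOME 2-era Ubuntu);
# run it from a console while logged out of the desktop session.
import time
from pathlib import Path

gconf_dir = Path.home() / ".gconf"
backup_dir = Path.home() / f".gconf.backup-{int(time.time())}"

if gconf_dir.exists():
    gconf_dir.rename(backup_dir)  # keep a backup instead of deleting outright
    print(f"Moved {gconf_dir} to {backup_dir}; log in again to get the defaults.")
else:
    print(f"{gconf_dir} not found; nothing to reset.")
```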

Tweaking Ubuntu's GUI

I was also quite pleased with the package manager; Debian-based distros are the ones that got it right. Or so I’ve heard, and judging by the nightmarish experiences I’ve had with Mandriva’s URPMI, it is probably true.

Now, for the downsides.

Well, Ubuntu is not Windows, and I have to retrain my “muscle memory” to new ways of doing things. So far, what I’ve found most difficult to surmount is the difference between the two systems’ “US International” keyboard layouts. While Linux’s targets all Latin-script languages, Windows’ layout is optimised for Western European ones (which is great for me, since I write in English, French and Portuguese). This makes for unexpected results: in Windows, if you want to write “do’s” (as in “the do’s and don’ts of laying out keyboards”), you just type [d] [o] ['] [s], and the system, recognising that “s” is not accented in any Western European language, interprets the ['] as an apostrophe. In Ubuntu you have to type [d] [o] ['] [space] [s], otherwise you’ll get “doś”. Worse, if you want to write “don’t” you still have to type the space after the apostrophe, even though no Latin-script language has an accented “t”; otherwise you just get an ugly beep. So far I’ve been unable to circumvent this annoying behaviour: editing the keyboard layout files seems to be useless, since the problem lies in how the system handles dead keys.

I’ve run into quite a few bugs as well: small ones (no keyboard tweak involving the Caps Lock key would work properly) and big ones (trying to record sound would freeze the system; wireless networks wouldn’t connect at all). Ubuntu ships periodic updates, so the big ones have already disappeared, but some of the small ones are still around.

I had to fiddle quite a bit with the installed packages to get the system working properly. Initially it had both GNOME and KDE installed, but KDE started to conflict with GNOME’s configuration daemon. Since I don’t (so far) use anything KDE-based, I just removed the KDE packages and the problem disappeared. It was much harder to figure out that the problems I was experiencing with Eclipse 2.4 and RSSOwl 2.0 were due to an exotic Java Virtual Machine (based on GNU’s GCJ) that came preinstalled on the system. Once I removed everything GCJ and kept only Sun’s Java, my life became much easier.
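
If you suspect the same problem, a quick way to see which JVM is actually the default and whether any GCJ packages are installed is something along these lines; this is only a rough diagnostic sketch, assuming a Debian-based system with the “alternatives” mechanism in place:

```python
# Rough diagnostic sketch (assumes a Debian-based system using the
# "alternatives" mechanism): show which java binary is the default
# and list any installed GCJ-related packages.
import subprocess

# Where does /etc/alternatives/java really point?
java_path = subprocess.run(
    ["readlink", "-f", "/etc/alternatives/java"],
    capture_output=True, text=True,
).stdout.strip()
print("Default java binary:", java_path or "not configured")

# List installed packages whose name mentions gcj ("ii" marks installed).
dpkg = subprocess.run(["dpkg", "-l"], capture_output=True, text=True)
gcj_packages = [line.split()[1] for line in dpkg.stdout.splitlines()
                if line.startswith("ii") and "gcj" in line.split()[1]]
print("Installed GCJ packages:", gcj_packages or "none")
```

From there, removing the offending packages and, if needed, re-selecting Sun’s JVM with “sudo update-alternatives --config java” should do the trick.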

Incidentally, I was somewhat disappointed to realise that the Eclipse version available via the package manager was 2.2 (almost two years old by now!), and that to install version 2.4 I would have to go the “Linux way” (download a tar.gz, create a folder, install the icons myself, and so on).

Ubuntu has some small but irritating limitations. One example: the system has no consistent way to tell an application to start minimised (which comes in handy when you want something to run at system start-up); either the application has its own non-standard option, or it is simply impossible. Another annoying example is the ugly, loud beep you get every time the system is unhappy with you. This beep is emitted by the legacy “PC speaker” device, not by the modern sound system, so you have no control over its volume or muting. Apparently, the only reliable way to disable it is to put the PC speaker module on the kernel’s “blacklist”, so the driver is never loaded. Not very user-friendly.
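
For reference, here is a sketch of that blacklisting, assuming the module is called pcspkr and that modprobe reads its configuration from /etc/modprobe.d/ (the exact file name varies between releases); it needs root, and the change takes effect on the next boot:

```python
# Sketch: silence the legacy PC speaker by blacklisting its kernel module.
# Assumes the module is named "pcspkr" and that modprobe reads files from
# /etc/modprobe.d/ (true on Ubuntu). Run as root; effective after reboot
# (or after unloading the module with "rmmod pcspkr").
from pathlib import Path

conf = Path("/etc/modprobe.d/blacklist.conf")
entry = "blacklist pcspkr"

existing = conf.read_text() if conf.exists() else ""
if entry not in existing:
    with conf.open("a") as f:
        f.write(entry + "\n")
    print(f"Added '{entry}' to {conf}")
else:
    print("The PC speaker module is already blacklisted.")
```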

Those small nuisances are part of the Linux experience, and though surprising to someone coming from the proprietary-software mindset, they are a foreseeable result of the Cathedral-and-Bazaar philosophy. Obviously, a system assembled from so many moving parts, each the fruit of a different mind, is bound to have some small inconsistencies.

In summary, I would say that, for a Linux desktop system, Ubuntu offers a nice experience. I don’t think it is yet smooth enough for technically unsavvy users, but it is perfect for middlebrow amateurs, who know their 0s and 1s but are not inclined to spend hours editing text-based configuration files.

What is worthy and what is puffy?

The only other scientist in the family is my cousin Laila, and as I try to navigate my way around this “web two dot oh of science”, I can hear her thoughtful advice in my mind: “before you dive into something, you should check how deep the water is”. Wise words: after all, people often end up with the metaphorical broken neck after investing energy into “the next big thing”, which later turns out to be a shallow pond.

Of course, it’s often impossible to know precisely how deep the water is, especially when much depends on the contributions of the user community. What I ask myself is: when can we start to foresee whether a new technology will succeed or fail for a given purpose?

Take Google’s Orkut, for example. When I first heard about it, my colleagues were talking about a professional network, something like what LinkedIn is today, but it quickly became clear that it would not work for that purpose, and at least from this point of view it failed. But as a social, “just for fun” network (like MySpace) it became a major success in Brazil.

For the last few months I’ve been trying out a lot of stuff (web 2.0, web 1.0, or not web at all) that I think could be useful for my research team:

Google Tools

I’ve recently consolidated all my mailboxes into a single Gmail account, and I am finding the service incredibly convenient. The Calendar is also fantastic, though I can’t believe they’ve left out something as essential as a task list. As for Google Docs, I find it much too rudimentary, even for lowbrow daily use.

Microsoft Groove

I liked the concept of Groove very much (and the little video demo is really, really seductive), but I quickly bumped into the harsh reality that, in the academic world, few people like to use Microsoft Office. What’s the use of a communication platform if there is no one to communicate with?

I considered starting a major evangelising campaign (helped by the fact that MS Office is very cheap for students nowadays), but when I realised that Groove wasn’t available for the Mac (evangelising someone into MS Office is hard but possible; evangelising someone out of a Mac is just a waste of time), and after two or three “sorry, the service is experiencing some problems” messages, I gave up. When evangelising people into a new product, it’s crucial that it works perfectly.

ThinkFree

I’ve been really captivated by ThinkFree. Feature-wise it is far behind Office 2007 (or LaTeX, for that matter). But it’s way ahead of Google Docs, and since it offers decent (though by no means perfect) conversion to and from MS Office, and very good web integration, it really deserves a second look. The fact that it is available for the three major platforms (Windows, Mac and Linux), which helps avoid religious wars in the lab, is also a major selling point. I’ll be trying it over the next few weeks and will keep you updated.

CiteULike

I had great expectations for CiteULike, and I still think it is a neat idea: putting your citation database online, sharing it with other researchers, and even creating interest groups to share and discuss papers. However, it seems that most users simply treat it as an online reference manager, and there is not much “two dot oh” synergy (discussion, active sharing, blogging, feedback…) happening in the groups I’ve visited so far. But I’ll keep an eye on it.

Zotero

My (silly) prejudice against applications implemented as Firefox extensions has been thoroughly dispelled by Zotero, a fantastic reference manager. Migrating the medium-sized database (about 160 entries) I had created in EndNote for my thesis was somewhat tricky because of my extensive use of custom fields, but after half an hour of adapting EndNote’s export output styles, everything went well. Now I want to see how well Zotero cooperates with word processors like MS Word, LyX and ThinkFree Write. I also want to see whether Zotero and CiteULike can work well together.

LinkedIn

This one seems to be all the rage in the States. After creating my profile on LinkedIn, I was surprised to see that all my former classmates from UFMG, in Brazil, were already there. But I am still a little sceptical about the usefulness of the concept. I recognise the importance of “networking”, but I am more doubtful about putting it online. I suspect that networking is a zero-sum game: if everyone is connected to everyone, the global effect is to level the playing field.

The “get introduced to” feature seems interesting, but do people use it?

Academia.edu

As its own advertisement says, this one is “a Facebook for academics”. The idea is not bad, but the current implementation is dreadful: the Flash interface is slow, everything is organised in a rigid hierarchy (what happens to labs shared by two universities?), and the information model favours painstaking tree-like browsing instead of direct search. After some hassle I was able to put my profile on Academia.edu, and the site seems to be gaining momentum. Maybe we can hope for a major UI improvement?

A webpage, a blog, one profile here, another there…

I have postponed the creation of this blog for too long: a blog where I can discuss my research, my teaching, my professional experience. Perhaps I was too focused on the need to finish that Ph.D. (and to publish those papers!), but it seems that the academic world has now been swept by a fever that formerly affected only my younger sister and her peers. But instead of Facebook or MySpace, researchers worldwide are putting their profiles on linked.in or academia.edu. And instead of exchanging songs on last.fm, they are trading citations on CiteULike.

“Web 2.0? This is sooo last week!” I can already see the Ph.D. freshmen rolling their eyes.

Now I still need to publish those papers — and to catch up with the fad!