We can’t tell you just yet…

(This entry is cross-posted from my lab’s blog.)

Anyone who has ever worked at the frontier between science and innovation has faced the dilemma of secrecy versus disclosure: the scientific spirit demands full publication of every implementation detail — a result that cannot be reproduced is not a result — but when you are seeking intellectual property rights, you are often forced to withhold some details until you have that patent.

We faced that quandary during our participation in MediaEval’s Violence Detection task: the scientist in us wanted to just tell everything. But the research project that led to our participation in that competition is not just a scientific project, it is also about innovation, in partnership with the Samsung Research Institute Brazil. As such, some details had to remain concealed, much to the frustration of everyone’s curiosity.

Fortunately, the task organizers took it in stride:


…that good-natured ribbing got everyone laughing at the task closing meeting!

We are sorry for the teasing, guys. We promise we will tell you everything soon… just not yet.

(Kudos to Mats and Martha for their good humor!)

Associate director of undergraduate studies

For the next few months I’ll be occupying the position of associate director of undergraduate studies for the Computer Engineering program, left vacant by Prof. Ivan Ricarte, who obtained a full professorship at another academic unit of UNICAMP. The current director is Prof. Helio Pedrini of the Institute of Computing. Prof. Akebo Yamakami has kindly accepted to be my “vice-associate”, an informal position that exists because the directorship is shared between two academic units. This is good news, because I’m a rookie when it comes to academic administration, while Prof. Yamakami has been involved in directing undergraduate studies since… forever. His experience will be invaluable.

I was appointed by the steering committee of the School of Electrical and Computer Engineering in an indirect election, for a provisional term. Next June, the entire electoral college (faculty, staff, and students) will vote for the next director here at FEEC, and for the next associate director at the Institute of Computing, since the positions switch between the two units at the end of each term. (I know, I know — it’s complicated — but you get used to the idiosyncrasies of Brazilian public administration after a while…)

I thank my colleagues of the steering committee for their trust.

Performance (recall vs time) of a few LSH techniques for general metric spaces

Talk at DCC, Universidad de Chile on Locality-Sensitive Hashing

My colleague Prof. Benjamin Bustos was kind enough to invite me for two weeks to collaborate with him and his students. In the context of that cooperation, I’ll be giving a talk at the Department of Computer Science, Universidad de Chile, on recent advances in Locality-Sensitive Hashing (LSH): “Advances on Locality-Sensitive Hashing for Large-Scale Indexing on General Metric Spaces”. Among other things, I’ll be talking about my group’s recent work on the topic.

Here’s the abstract:

Locality-sensitive hashing (LSH), initially available only for Hamming, Jaccard, Manhattan, and Euclidean spaces, is now competitive for general metric spaces.

Locality-Sensitive Hashing is a family of techniques for similarity search that has gained much attention in the literature, both for its beautiful formalism and for its ability to perform well in systems where the cost of access to the data is uniform. However, traditional LSH poses the challenge of deriving a completely new family of locality-sensitive hash functions for each distance function. Recently, researchers have proposed works that greatly extend the applicability of LSH, both by creating locality-sensitive functions that work for general metric spaces, and by redesigning the algorithm to work in distributed-memory systems, where the cost of access to the data is not uniform (NUMA). In this talk, I’ll introduce the LSH formalism, and then focus on those recent advances.
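The core trick is easy to sketch. Below is a minimal, illustrative implementation of the classic random-hyperplane LSH for cosine similarity — not the metric-space variants the talk covers, and all names are mine:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_hash(dim, n_bits):
    """Sample one LSH function: n_bits random hyperplanes through the origin."""
    planes = rng.normal(size=(n_bits, dim))
    def h(x):
        # Each bit records the side of a hyperplane the point falls on;
        # nearby points (small angle) tend to agree on most bits.
        return tuple((planes @ x) > 0)
    return h

dim, n_bits = 64, 12
h = make_hash(dim, n_bits)
points = rng.normal(size=(1000, dim))

# Index: points hashing to the same bucket become candidate neighbours,
# so a query inspects one bucket instead of scanning the whole dataset.
buckets = {}
for i, p in enumerate(points):
    buckets.setdefault(h(p), []).append(i)

q = points[0]                       # an exact copy is guaranteed to collide;
candidates = buckets.get(h(q), [])  # near-copies collide with high probability
```

In practice several hash tables are used, and the recall/time trade-off shown in the figure above comes from tuning the number of tables and bits.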

The talk will be given in English, while the discussions will be in both English and Spanish.

When: Thursday, September 11, at 15h00 (updated from the originally announced 14h00).

Where: Departamento de Ciencias de la Computación, Universidad de Chile. Av. Beauchef, 851, Santiago, Chile, 837-0456.


Paper on Automated Melanoma Screening Accepted at SIBGRAPI

Our research on automated screening for melanoma was accepted for SIBGRAPI’2014, the Brazilian conference on Graphics, Patterns and Images, to be held in Rio de Janeiro next month.

Melanoma is the most dangerous skin cancer type, being responsible for the majority of deaths due to skin diseases. It is, on the other hand, one of the most curable forms of cancer when detected early enough. Because the prevalence of melanoma is increasing throughout the world, tools for automated screening — a test for whether or not the patient should seek a dermatologist — are a public health necessity. Automated screening is particularly important in poor, rural, or isolated communities with no resident dermatologist.

Extracts of skin lesions. Melanoma (left column) and benign skin lesions (right column) appear very similar, making the task of automated screening very challenging.


The paper, “Statistical Learning Approach for Robust Melanoma Screening”, advances the state of the art by employing a cutting-edge extension to the bags-of-words model called BossaNova. Here’s the abstract:

According to the American Cancer Society, one person dies of melanoma every 57 minutes, although it is the most curable type of cancer if detected early. Thus, computer-aided diagnosis for melanoma screening has been a topic of active research. Much of the existing art is based on the Bag-of-Visual-Words (BoVW) model, combined with color and texture descriptors. However, recent advances in the BoVW model, as well as the evaluation of the importance of the many different factors affecting the BoVW model were yet to be explored, thus motivating our work. We show that a new approach for melanoma screening, based upon the state-of-the-art BossaNova descriptors, shows very promising results for screening, reaching an AUC of up to 93.7%. An important contribution of this work is an evaluation of the factors that affect the performance of the two-layered BoVW model. Our results show that the low-level layer has a major impact on the accuracy of the model, but that the codebook size on the mid-level layer is also important. Those results may guide future works on melanoma screening.
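For readers unfamiliar with the two-layered model mentioned in the abstract, here is a minimal sketch of a plain BoVW pipeline (low-level descriptors, a learned codebook, histogram encoding, and an SVM on top). It illustrates the general idea on synthetic data; it is not the paper’s implementation, and all names are mine:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(42)

def fake_descriptors(label, n=50, dim=8):
    # Stand-in for the low-level layer: each "image" yields a bag of
    # local descriptors (color/texture descriptors in the paper).
    return rng.normal(loc=label, size=(n, dim))

images = [(fake_descriptors(y), y) for y in [0, 1] * 20]

# Mid-level, step 1: learn a visual codebook over all descriptors.
codebook = KMeans(n_clusters=16, n_init=10, random_state=0)
codebook.fit(np.vstack([d for d, _ in images]))

def encode(desc):
    # Mid-level, step 2: hard-assignment histogram of codewords.
    # (BossaNova refines this step, keeping per-codeword histograms of
    # descriptor-to-codeword distances instead of a single count.)
    hist = np.bincount(codebook.predict(desc), minlength=16)
    return hist / hist.sum()

X = np.array([encode(d) for d, _ in images])
y = np.array([label for _, label in images])
clf = LinearSVC().fit(X, y)   # top layer: a linear max-margin classifier
```

The paper’s evaluation is precisely about which of those layers (low-level descriptors vs. mid-level codebook size) matters most for accuracy.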

The full text of the paper is already available on my publications page.

In addition, Michel Fornaciali has created a mini-site with extra information about the paper, including the executables for the method we implemented, and the AUC measures of all 320 runs employed in the statistical analysis.

The dataset employed was kindly provided by the researchers of the German project IRMA, hosted at RWTH Aachen University. We are working with them to make all the data publicly available.

Call for Contributions — Symposium of Signal Processing @ UNICAMP

The fifth edition of the University of Campinas Signal Processing Symposium (SPS-Unicamp) will take place this year on September 15–17.

This local symposium, promoted by the research community of São Paulo, is gaining importance as a dynamic, interactive event that offers young scientists the opportunity to network among themselves and with industrial partners.

The call for contributions is open. SPS-Unicamp welcomes paper and mini-course proposals in the following areas:

  • Biomedical engineering;
  • Image and video processing, visualization, and computer graphics;
  • Signal processing applied to forensics, biometrics, and bioinformatics;
  • Control and automation;
  • Seismic processing;
  • Communications;
  • Signal processing applied to sports science;
  • Theory of signal processing;
  • Hardware implementation of signal processing.

Papers may be written in either English or Portuguese. Both 4-page short papers and 1-page extended abstracts are accepted. Not only original works with results, but also works in progress and research-project papers are welcome.

Deadline: August 4th, 2014.

For more information, please check the SPS-Unicamp homepage.

The IEEE Women in Engineering South Brazil student chapter, hosted at Unicamp, and the IEEE Signal Processing Society São Paulo chapter support this event.

ROC curves for hard exudates using class-based codebooks and comparing our previous approach with BoW [11] and our newly proposed technique with BossaNova. The AUCs are shown on the legend.

Diabetic Retinopathy Paper Accepted at IEEE EMBC’14

(This entry was crossposted with minor modifications from my lab’s blog.)

Our cooperative work on Diabetic Retinopathy has produced a new paper, now accepted at the IEEE Engineering in Medicine and Biology Conference! This new work explores the BossaNova representation — a state-of-the-art extension to the bags-of-words model — in the task of Diabetic Retinopathy classification.

Take a look at the abstract:

The biomedical community has shown a continued interest in automated detection of Diabetic Retinopathy (DR), with new imaging techniques, evolving diagnostic criteria, and advancing computing methods. Existing state of the art for detecting DR-related lesions tends to emphasize different, specific approaches for each type of lesion. However, recent research has aimed at general frameworks adaptable for large classes of lesions. In this paper, we follow this latter trend by exploring a very flexible framework, based upon two-tiered feature extraction (low-level and mid-level) from images and Support Vector Machines. The main contribution of this work is the evaluation of BossaNova, a recent and powerful mid-level image characterization technique, which we contrast with previous art based upon classical Bag of Visual Words (BoVW). The new technique using BossaNova achieves a detection performance (measured by area under the curve — AUC) of 96.4% for hard exudates, and 93.5% for red lesions using a cross-dataset training/testing protocol.
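The AUC figures above are areas under the ROC curve of the detector. As a reminder of how such a number is computed, here is a minimal sketch on synthetic detector scores (not the paper’s data):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(7)

# Synthetic detector scores: images with lesions tend to score higher.
y_true = np.array([0] * 500 + [1] * 500)
scores = np.concatenate([rng.normal(0.0, 1.0, 500),   # healthy
                         rng.normal(2.0, 1.0, 500)])  # lesion

# Sweeping a threshold over the scores traces the ROC curve; the AUC
# summarizes it into one number (1.0 = perfect ranking, 0.5 = chance).
fpr, tpr, thresholds = roc_curve(y_true, scores)
auc = roc_auc_score(y_true, scores)
```

The cross-dataset protocol mentioned in the abstract simply means the scores come from a model trained on one dataset and tested on another.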


The full-text preprint is available on my publications page. The conference will be held in Chicago, IL, USA, from August 26 to 30, 2014.

Continuing our efforts to make our results reproducible, the datasets used in this work are publicly available at FigShare, under the DOI 10.6084/m9.figshare.953671. The code employed will be released soon.

Accuracy of binary classification between very popular and very unpopular (middle case excluded) measured by number of repins after subtracting the influence of number of board followers. Error bars are standard deviations.

Paper Accepted at WebSci 2014 on Image Popularity in Social Networks

We had a paper accepted at the ACM Web Science Conference 2014 (WebSci 2014): a study of how well visual attributes can predict the popularity of images on Pinterest (measured as the number of repins). We found that social attributes are more predictive of popularity than automatically extracted visual attributes (not very surprisingly). However, for heavily followed users, visual attributes account for a considerable fraction of the deviation from the expected behavior, after we factor out the most predictive social attribute (number of followers). This is shown in the featured image of this blog entry.

Here’s the full abstract of the paper:

Little is known on how visual content affects the popularity on social networks, despite images being now ubiquitous on the Web, and currently accounting for a considerable fraction of all content shared. Existing art on image sharing focuses mainly on non-visual attributes. In this work we take a complementary approach, and investigate resharing from a mainly visual perspective. Two sets of visual features are proposed, encoding both aesthetical properties (brightness, contrast, sharpness, etc.), and semantical content (concepts represented by the images). We collected data from a large image-sharing service (Pinterest) and evaluated the predictive power of different features on popularity (number of reshares). We found that visual properties have low predictive power compared to that of social cues. However, after factoring-out social influence, visual features show considerable predictive power, especially for images with higher exposure, with over 3:1 accuracy odds when classifying highly exposed images between very popular and unpopular.
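To give a rough idea of what the “aesthetical” features look like, here is a toy sketch computing brightness, contrast, and a sharpness proxy on a grayscale image. The definitions are simplistic stand-ins of my own, not the exact ones used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def aesthetic_features(img):
    """Toy versions of three aesthetic cues (grayscale image in [0, 1]).

    These are illustrative stand-ins, not the paper's definitions.
    """
    brightness = img.mean()
    contrast = img.std()
    # Sharpness proxy: mean magnitude of the image gradient.
    gy, gx = np.gradient(img)
    sharpness = np.hypot(gx, gy).mean()
    return np.array([brightness, contrast, sharpness])

flat = np.full((64, 64), 0.5)    # uniform gray: no contrast, no edges
noisy = rng.random((64, 64))     # high-frequency content: "sharper"

f_flat = aesthetic_features(flat)
f_noisy = aesthetic_features(noisy)
```

Features like these are what gets fed, alongside the semantic concept detectors, to the popularity predictor.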


The paper was a cooperation between my post-doc Dr. Sandra Avila and me, from the RECOD Lab here at the State University of Campinas, and master students Luam Totti and Felipe Costa and Profs. Wagner Meira Jr. and Virgílio Almeida, from InWeb — the National Institute of Science and Technology for the Web at the Federal University of Minas Gerais.

The full text is available on my publications page.

In accordance with our policy of improving the reproducibility of our published results, both the data and the code of the paper are available. Due to FigShare’s restrictions, the dataset is provided as a fragmented zipped SQL dump. I thank Luam Totti very much for agreeing to put in the effort to make that possible.


Regions of interest and points of interest in a retinoscopy image.

Paper Accepted at PlosOne on Diabetic Retinopathy

Our research on automated screening for Diabetic Retinopathy — “Advancing Bag-of-Visual-Words Representations for Lesion Classification in Retinal Images” — has just been published in the leading open-access journal PlosOne.

The research work was performed in cooperation with Ph.D. student Ramon Pires, my colleagues at the RECOD Lab Profs. Anderson Rocha and Jacques Wainer, and Prof. Herbert Jelinek (Charles Sturt University, Australia).

Here’s the abstract:

Diabetic Retinopathy (DR) is a complication of diabetes that can lead to blindness if not readily discovered. Automated screening algorithms have the potential to improve identification of patients who need further medical attention. However, the identification of lesions must be accurate to be useful for clinical application. The bag-of-visual-words (BoVW) algorithm employs a maximum-margin classifier in a flexible framework that is able to detect the most common DR-related lesions such as microaneurysms, cotton-wool spots and hard exudates. BoVW allows to bypass the need for pre- and post-processing of the retinographic images, as well as the need of specific ad hoc techniques for identification of each type of lesion. An extensive evaluation of the BoVW model, using three large retinograph datasets (DR1, DR2 and Messidor) with different resolution and collected by different healthcare personnel, was performed. The results demonstrate that the BoVW classification approach can identify different lesions within an image without having to utilize different algorithms for each lesion reducing processing time and providing a more flexible diagnostic system. Our BoVW scheme is based on sparse low-level feature detection with a Speeded-Up Robust Features (SURF) local descriptor, and mid-level features based on semi-soft coding with max pooling. The best BoVW representation for retinal image classification was an area under the receiver operating characteristic curve (AUC-ROC) of 97.8% (exudates) and 93.5% (red lesions), applying a cross-dataset validation protocol. To assess the accuracy for detecting cases that require referral within one year, the sparse extraction technique associated with semi-soft coding and max pooling obtained an AUC of 94.2 ± 2.0%, outperforming current methods.
Those results indicate that, for retinal image classification tasks in clinical practice, BoVW is equal and, in some instances, surpasses results obtained using dense detection (widely believed to be the best choice in many vision problems) for the low-level descriptors.
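To make the “semi-soft coding with max pooling” step concrete, here is a toy sketch of that mid-level encoding on random data. The Gaussian kernel and the value k = 4 are illustrative choices of mine, not the paper’s exact parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

descriptors = rng.normal(size=(200, 16))   # one image's local descriptors
codewords = rng.normal(size=(32, 16))      # learned codebook

# Soft assignment: similarity of each descriptor to each codeword.
# "Semi-soft" coding keeps only the k nearest codewords per descriptor.
d2 = ((descriptors[:, None, :] - codewords[None, :, :]) ** 2).sum(-1)
sims = np.exp(-d2 / d2.mean())
k = 4
cutoff = np.sort(sims, axis=1)[:, -k][:, None]
sims = np.where(sims >= cutoff, sims, 0.0)   # zero out all but k nearest

# Pooling collapses the per-descriptor codes into one image signature:
sum_pooled = sims.sum(axis=0)    # classic BoVW-style accumulation
max_pooled = sims.max(axis=0)    # max pooling, the variant used here
```

Max pooling records only the strongest activation of each codeword, which makes the signature less sensitive to how many descriptors an image happens to have.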


The full text of the article is available online.

The retinopathy datasets are hosted on FigShare, under DOI 10.6084/m9.figshare.953671.

The code used in the article is hosted on GitHub, in the repository piresramon/retina.BoVW.plosone.git.


Probabilistic Ruminations on the Mozilla Fiasco

I’m doubly uneasy: I don’t know how I feel about the whole Mozilla fiasco, and I hate that I don’t know how to feel about it.

I can’t speak for all gay people, but I struggle to find a balance between two biases. On one hand, obviously, there’s the tendency to take homophobia personally. But on the other hand, maybe surprisingly, there is desensitization. After hearing about, witnessing, or experiencing a high enough number of nasty incidents, some involving maiming and death, and many involving the loss of appreciable measures of wealth, prestige, sanity, or health, one gets a bit… hardened.

My rule of thumb says “switch sexual orientation for race and adjust your outrage accordingly”. And I assure you that if I heard some CEO had paid a thousand bucks to bring back racially segregated marriage, my outrage would be clear-cut: I would call for that person’s head to roll.

So why do I hesitate (while feeling horrible for hesitating) to give the same verdict for Brendan Eich?

My hesitation does not come from the belief that opposing same-sex marriage is compatible with embracing LGBT people (as many marriage-equality opponents would love us to believe). At least this, for me, is crystal clear: if you oppose same-sex marriage, you are asserting that, in some sense, same-sex relationships deserve less recognition than heterosexual ones. And that’s homophobia. If you do it for religious reasons, you do it for homophobic religious reasons. (And please, don’t even think of bringing up that stale “different but equal” bullshit: if you must be a bigot, own your bigotry.)

Nor do I think that the correct frame for the discussion is free speech. Even if we agree to join the acrobatic virtuosity of the US Supreme Court in stretching the definition of speech so as to accept the act of financing a political movement as an act of speech, that frame still does not provide a convincing defense for Eich.

Yes, for me, freedom of speech is sacred: “I don’t agree with what you have to say, but I’ll defend to the death your right to say it”, etc., etc. It implies the right to freely express one’s ideas, even when those ideas show one’s a complete jerk. But here’s the catch: it does not imply the right to some kind of magical protection from the criticism and consequences of expressing ideas that show one’s a complete jerk. Take note of that disclaimer, kids: it’s important.

If we must consider it free speech for Eich to donate money to prevent same-sex couples from acceding to the protections of civil marriage, with very unpleasant concrete, material consequences for those couples, then we must, as well, consider it free speech for people to call for Eich to get the hell out of the Mozilla Foundation, even if the concrete, material consequences for him are, well, unpleasant.

The usual oligophrenics are, of course, shouting “Gay Gestapo!” I’ll only cursorily comment on the bad taste of this wording, by warning that anyone who dares to use it in my presence should expect in return an overly long unsolicited lecture about Paragraph 175. (I must also note that when it is the turn of those hypocrites to call for people’s heads, they reserve for themselves metaphors conveying only righteousness, never censorship or violence.)

So, if I am not willing to buy Eich’s antinomy that he opposes same-sex marriage while completely embracing LGBT people, and if the problem here is not a violation of his freedom of speech, then where does my hesitation come from?

Let’s come back to my outrage-adjustment rule, i.e., be as outraged by an act of homophobia as you would be by an equivalent act of racism. I think there are two difficulties here.

First, and obviously, not all acts of homophobia are equal, just as not all acts of racism are equal. Paraphrasing the puppets of Avenue Q, everyone is a little bit homophobic, including you and me. Defending segregation is not the same as defending slavery. Opposing same-sex marriage is not the same as defending the criminalization of homosexuality. Things have gradations.

Second, and here’s the root of my discomfort: the status of what is socially acceptable or not in terms of homophobia is changing fast. This is good, of course, but it means we must be careful not to make presentist/ethnocentric judgements. And to cut people some slack, as some adapt to the new mores faster than others.

If we are to judge people by their acts — and not simply judge the acts of people — I mean, if we are to jump into the exercise of estimating P(N | A, X), where N is the unobservable measure of a person’s niceness, A is her observable acts, and X is all other prior information; then we must be very careful to ensure that X encodes the social mores of the place and time when the act was committed, and not the mores of our present place and time. Otherwise we will fall into the fallacies of presentism (time) or ethnocentrism (space). Thus, the relevant likelihood to estimate here is not P(Eich gave 1000 bucks to repeal same-sex marriage | Eich is a nice guy, California, 2014, X), but P(Eich gave 1000 bucks to repeal same-sex marriage | Eich is a nice guy, California, 2008, X) [that model assumes that niceness at his age is stationary].
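For the record, the estimation exercise above is just Bayes’ rule: P(N | A, X) ∝ P(A | N, X) · P(N | X). A toy numeric illustration of why the year encoded in X matters — every number here is invented, purely for illustration:

```python
# Bayes' rule with invented numbers: how the era in X changes the verdict.
def posterior_nice(p_act_given_nice, p_act_given_not_nice, prior_nice=0.9):
    """P(nice | act, X), with the era encoded in the likelihoods."""
    num = p_act_given_nice * prior_nice
    den = num + p_act_given_not_nice * (1.0 - prior_nice)
    return num / den

# Suppose the act was, sadly, common even among otherwise decent people
# in 2008, but much rarer among them by 2014 (invented likelihoods):
p_2008 = posterior_nice(p_act_given_nice=0.30, p_act_given_not_nice=0.70)
p_2014 = posterior_nice(p_act_given_nice=0.05, p_act_given_not_nice=0.70)
```

With these made-up numbers, the same act leaves niceness quite plausible when judged against 2008 mores, and much less so against 2014 mores — which is exactly the presentism trap.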

Then there is forgiveness. Societal views on homosexuality are (fortunately) evolving fast. Scant decades ago — less than my age — gay bashing in Brazil was a wholesome sport practiced by bored guys looking for a thrill. In many parts of the world it still is. Meanwhile, many countries now have hate-crime laws, non-discrimination acts, and civil same-sex marriage. In some places, the f-word has become as much a taboo as the n-word. Some people are having a hard time adapting.

I’ll be the last one to defend the ridiculous non sequitur that “tolerance implies tolerating the intolerant”. But, for me, accepting some intolerance, provisionally and conditionally on further dialog, is compelling in two senses: charitable and utilitarian. Charitable because no one is perfect: everyone is a little bit racist, everyone is a little bit homophobic, everyone is a jerk now and then, everyone says and does things that they will regret later, etc. The Golden Rule applies. Utilitarian because if we can engage with people and help them cross the bridge to tolerance, in the end we gain more than if we alienate them to the fringes of radical intolerance.

So is my verdict that the boycott of Eich was wrong, and that we should instead have let him occupy the CEO position, all the while keeping the dialog with him? It is not that simple. Unless I have missed something, Eich has never really settled the issue of whether or not he is against marriage equality today.

This is the point that really troubles me: I think that the likelihood P(Person is against same-sex marriage | Person is nice, California, T, X) is becoming vanishingly small as time T progresses. However, anyone should have the right to hold — privately — any thought, even the most horrifying, even the most nauseating, without being subject to any sanctions whatsoever. If freedom of speech is sacred and untouchable, freedom of thought should be A(sacred, untouchable). As long as Eich keeps his thoughts private, for me, it makes no difference if he wants each and every LGBT person lining up for the gas chambers: thoughts are unobservable, and people should not be judged by them.

But Eich has, in the past, acted on those thoughts. He has donated money to prevent LGBT people in California from marrying. He has donated money to Pat Buchanan’s and Ron Paul’s campaigns in the 1990s (ultimately, it was those donations that gave him the coup de grâce). His poison was very much observable in the past.

Honestly, I’m glad he’s gone — but I’m still not sure it was the right decision to ask him to go. Which loss function should we have applied to his case? I wish I knew.

* * *

In my first term as a Computer Science undergrad, my colleagues thought it would be hilarious to change the screen savers of all our lab computers to show, in big blinking letters, a phrase calling me a gay slur. This went on for weeks.

Today, it strikes me that my colleagues could decide it would be okay to play such a prank, and do it without any shame or fear of punishment; that the faculty and administration, aware of the deeds, could think it would be okay to do nothing about it; that I could think it would be okay to be mildly annoyed, shrug it off, and not lodge a formal complaint. All this happened less than 20 years ago: if it happened today, I am sure the case would make the national press.

I don’t want to be draconian with Eich. If I held my alma mater and old classmates to my standards of today, I would hate them all. And I don’t. I have scientific collaborations with my former professors. When my former classmates and I meet, we treat each other cordially. I have no means to inspect the inside of their minds to check whether those people still bear animosity against gay people, but if they do, they hide it really well. If I have evolved so much in 17 years, shouldn’t I give them at least the benefit of the doubt?

But neither do I want to be lax with Eich, lest my leniency become a betrayal of everything LGBT people have earned so arduously.

Featured images — Money Crush by Rocky Lubbers, Ideal Husbands by See-ming Lee.

Code Released for Visual Word Spatial Arrangement

Illustration reproduced from “Visual word spatial arrangement for image retrieval and classification”, Pattern Recognition, 2014.

My former Ph.D. student Dr. Otávio Penatti (now working at the Samsung Research Institute Brazil and cooperating with RECOD in a number of projects) has released the source code for the technique implemented in our paper “Visual word spatial arrangement for image retrieval and classification”, recently published in the Pattern Recognition journal.

The code has been released under a copyleft GPLv3 license.