Project ArtEmis is training a machine to say meaningful and emotional things about artwork presented to it.
To train the model, the creators collected a dataset of 439,000 human emotional reactions to artworks.
From this collection, the algorithm learned to make links between the content of the image (a big pile of pixels!) and feelings, and it even produces interesting metaphors.
The first picture shows successful examples of emotion + explanation: “The red looks like blood,” “The woman looks sad and lonely.”
In the second picture, the researchers specified the emotion in advance, and the model composed a suitable explanation for it (see the sketch after these examples).
In the third picture, the researchers dared to show failures: in the two examples on the left, the model does not recognize what is in the picture, and the two on the right show a situation called mode collapse, in which the model falls back on a simple, lazy explanation that satisfies the conditions (“it makes me feel sad”); the short diversity check below illustrates what that looks like.
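The second picture's setup, generating an explanation for a given emotion, can be sketched as a caption decoder conditioned on an emotion embedding. This is a minimal illustrative sketch, not the ArtEmis code: the class, feature sizes, and vocabulary are all assumed here, and the nine emotion labels follow the categories the project describes.

```python
# Hedged sketch of emotion-grounded captioning: the target emotion is given
# in advance and every generated word is conditioned on it. All names and
# sizes below are hypothetical placeholders, not the ArtEmis implementation.
import torch
import torch.nn as nn

EMOTIONS = ["amusement", "awe", "contentment", "excitement",
            "anger", "disgust", "fear", "sadness", "something else"]

class EmotionGroundedCaptioner(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        # The requested emotion is embedded and fused into the initial
        # hidden state, so the whole caption is conditioned on it.
        self.emotion_embed = nn.Embedding(len(EMOTIONS), hidden_dim)
        self.image_proj = nn.Linear(2048, hidden_dim)  # e.g. CNN features
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_feats, emotion_ids, caption_ids):
        # Fuse image features and the requested emotion into the start state.
        h0 = (self.image_proj(image_feats)
              + self.emotion_embed(emotion_ids)).unsqueeze(0)
        x = self.word_embed(caption_ids)
        h, _ = self.rnn(x, h0)
        return self.out(h)  # next-word logits at every position

# Toy usage: batch of 2 images, asking for "sadness" vs. "awe" explanations.
model = EmotionGroundedCaptioner(vocab_size=10_000)
image_feats = torch.randn(2, 2048)
emotion_ids = torch.tensor([EMOTIONS.index("sadness"), EMOTIONS.index("awe")])
caption_ids = torch.randint(0, 10_000, (2, 12))  # teacher-forced tokens
logits = model(image_feats, emotion_ids, caption_ids)
print(logits.shape)  # torch.Size([2, 12, 10000])
```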
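And here is a quick way to see the mode-collapse failure from the third picture in numbers: a collapsed model keeps emitting the same safe sentence, so the fraction of distinct outputs drops toward zero. The captions below are invented examples, not actual ArtEmis outputs.

```python
# Distinct-output ratio over generated captions; near 0 suggests collapse.
def distinct_ratio(captions):
    """Fraction of unique captions in a batch of generations."""
    return len(set(captions)) / len(captions)

collapsed = ["it makes me feel sad"] * 9 + ["the colors are dark"]
healthy = [f"caption variant {i}" for i in range(10)]

print(distinct_ratio(collapsed))  # 0.2 -> mostly one lazy answer
print(distinct_ratio(healthy))    # 1.0 -> diverse explanations
```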
For the project page with the paper and details (and code to be published soon): https://lnkd.in/dGtJFiQ
This excerpt was translated from this Facebook group: https://lnkd.in/dx5hqb2