When Mary Shelley wrote her novel “Frankenstein,” one of its themes was the danger posed by the combination of new technology and the desire to test the boundaries of science without regard for moral consequences. Dr. Frankenstein’s drive for knowledge led to an Icarian fall from grace; his efforts to play God instead produced a “fallen angel” that brought him only despair and sorrow.
Shelley was writing during the latter days of the Industrial Age, when science had been advancing at an exponential rate and discoveries like electricity led to massive social changes. While they raised the quality of life, these changes also produced some psychological malaise about the consequences and risks of so much change; hence the book. Today we face a similar nexus of technology and morals, in part in the form of artificial intelligence (AI). While AI has limitless potential to do good, it also has a dark underside that could lead to negative consequences if used unscrupulously.
But let’s go back a minute to where this all starts. This year, Stanford researchers Michal Kosinski and Yilun Wang trained a fairly basic AI neural network to recognize sexual orientation in humans using 35,326 pictures of 14,776 men and women. Put another way, they taught a computer gaydar by showing it pictures of gay and straight faces. Not only did the AI learn gaydar, it learned it fantastically well. Human control subjects in the study correctly guessed whether photos were of gay or straight men 61% of the time, declining to 54% accuracy for lesbians versus straight women (basically chance). The AI, on the other hand, was 81% accurate for men and 74% accurate for women. Its accuracy improved to 91% and 83%, respectively, when the algorithm was shown five images of the same person (under certain conditions). Wow.
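To make the setup concrete: the pipeline reportedly reduces each face photo to a numeric feature vector, trains a simple classifier on those vectors, and boosts accuracy by averaging predictions over several photos of the same person. Here is a toy sketch of that idea with entirely synthetic “embeddings” (every number and name below is illustrative, not from the paper; the real study used features extracted from actual face images):

```python
# Toy sketch, NOT the study's code: synthetic "face embeddings" carrying a
# weak class signal, a logistic-regression classifier, and prediction
# averaging over five noisy photos of the same person.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

n_people, dim = 5000, 128                    # synthetic dataset size
labels = rng.integers(0, 2, size=n_people)   # toy binary labels
signal = rng.normal(size=dim)                # direction weakly separating the classes
base = rng.normal(size=(n_people, dim)) + 0.15 * np.outer(labels, signal)

train, test = slice(0, 2000), slice(2000, None)
clf = LogisticRegression(max_iter=2000).fit(base[train], labels[train])

# Each "photo" is the person's base embedding plus photo-specific noise.
def photos():
    return base[test] + rng.normal(scale=1.5, size=base[test].shape)

# Accuracy from a single noisy photo per person...
acc_one = clf.score(photos(), labels[test])

# ...versus averaging predicted probabilities over five photos per person,
# which cancels photo noise and lifts accuracy (the study's 81% -> 91% effect).
probs = np.mean([clf.predict_proba(photos())[:, 1] for _ in range(5)], axis=0)
acc_five = np.mean((probs > 0.5) == labels[test])

print(f"one photo: {acc_one:.2f}, five photos averaged: {acc_five:.2f}")
```

The averaging step is the interesting part: each extra photo of the same person is an independent noisy measurement, so pooling predictions reduces noise without any retraining.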
The study has interesting implications. First of all, the study found that gay men and women tend to have “gender atypical” features, expressions, and grooming styles. No one would be surprised to find out that lesbians are less likely to be wearing make-up and have carefully manicured eyebrows than their heterosexual peers, and more likely to be wearing beanies or baseball hats and have faux-hawks (if we’re going to go by stereotypes), but the actual science is a little more sophisticated than that. The AI combed an extensive dataset for visual patterns, and among the patterns it found: gay men tended, on average, to have narrower jaws, longer noses, and larger foreheads than straight men, while gay women tended to have larger jaws and smaller foreheads than straight women. Um, and also, lesbians were more likely to wear baseball hats in their profile pictures.
Based on these physical features, the study authors hypothesized that sexual orientation may have a hormonal/developmental basis: as fetuses develop in the womb, they are exposed to various levels of hormones such as testosterone. These hormones are known to play a role in developing facial structures, and therefore may be involved in determining sexuality as well. Examining the AI’s results, the researchers found that it weighted the nose, eyes, eyebrows, cheeks, hairline, and chin most heavily when determining male sexuality, and the nose, mouth corners, hair, and neckline for women (which I suppose means that if you’re trying to use your own gaydar, you should focus on a woman’s mouth and nose).
I’m biased, but the lesbian looks prettier, right?
A second implication has to do with self-identification, something not considered by the study. Online dating applications are potentially a treasure trove of information for researchers conducting sexuality studies, and for good reason (although OKCupid, Match.com, eHarmony, and Plenty of Fish all prohibit scraping or research use of their data in their Terms of Service, so it’s unclear whether this particular study was done surreptitiously on one of those sites).
For this particular study, rather than having to recruit almost 15,000 heterosexual and homosexual participants, the researchers had only to hit a few buttons to get a ready-made pool of subjects who had already self-identified their sexual orientation. However, the study couldn’t control for sexual fluidity: it presented the AI with only a binary gay/straight option, when in fact some people fall somewhere along a sexual spectrum.
If an individual was tagged as gay but self-identified on the dating site as straight, who’s to say that individual won’t turn out to be a late-in-life gay, for example? Or perhaps they’re bisexual but were looking to date a specific gender. It has also been speculated that this sexual spectrum is one reason the AI’s accuracy for women was lower: if women are significantly more sexually fluid than men, it stands to reason that the AI would struggle to sort them into binary categories.
As AfterEllen has written before, studies since the 1990s have indicated that people look for other non-gender-normative cues such as style and fit, jewelry, posture, body type, walk or gait, and both the types and frequencies of gestures when exercising gaydar, so one can only imagine how accurate the AI would be with additional inputs. However, this is where we start getting to Frankenstein’s monster. After all, now that we know AI can be used to tell who is gay and who is straight with pretty darn good accuracy, what’s to prevent me from applying the same algorithm to all my favorite actresses and figuring out who I need to woo with flowers and my winning good humor? (A somewhat benign application.) What’s to stop companies from screening out gay applicants, or homophobic governments from rounding up people with “gayface”? (A definitely malignant application.) It is all too easy to imagine a world in which cruel high school teenagers pre-screen their classmates and know exactly who to pick on.
In fact, the main point of the study is to show that this facial recognition technology already exists off the shelf, so governments and companies need to immediately consider ways to safeguard individual privacy and regulate the use of facial analysis information and research. Unfortunately, democratic governments in particular seem to be especially bad at this type of thing (remember how it took Upton Sinclair’s “The Jungle” to convince the US Government that hotdogs shouldn’t be allowed to be made from rat poop and sawdust off the warehouse floor?).
So while one day soon you might be able to walk into a bar and ask Siri where the lesbians are, making an AI your wingwoman (by then, AI will also be able to judge based on pupil dilation if she’s into you and search her social media to see whether you’re Facebook compatible), we should think long and hard first about whether this technology will be a fun social boon, or a dangerous tool to be wielded in homophobic hands. After all, I’m all for taking a selfie to see how gay my phone thinks I am, but I’d rather that not be used to determine my access to healthcare and voting rights.