News

Meta Releases Lying, Offensive AI and Pretends to Be Surprised

Like trouble, bad behavior by Meta shows up whether you look for it or not. The latest is an open-source language model that was supposed to deliver reliable search results because it had been trained on academic papers. Alas, it was quickly withdrawn after reviewers found that its output, while grammatical and plausible, was often incorrect, not to mention filled with “antisemitism, homophobia, and misogyny.” How can this be a surprise?