The Looming Threat of “Liar’s Dividend” in the Era of Deep Fakes
As crucial elections, including the US election, approach, the rise of deep fake misinformation introduces a concerning phenomenon known as the “liar’s dividend.” Deep fakes, realistic AI-generated media, not only distort the truth but also empower individuals to dismiss authentic content as fake.
Politicians globally are increasingly using AI as a scapegoat, dismissing controversial videos and leaked recordings as potentially AI-generated.
The destabilisation of truth by AI poses a significant challenge, creating a landscape where discerning real from fake becomes increasingly difficult. Efforts to curb deep fakes have so far been shallow: tech companies are exploring verification methods, but even experts can struggle to reliably distinguish authentic content from manipulated content, raising the spectre of widespread misinformation chaos.
Chicago Developers Empower Artists to Combat Unethical Data Practices with Nightshade
Today is the day. Nightshade v1.0 is ready. Performance tuning is done, UI fixes are done.
You can download Nightshade v1.0 from https://t.co/knwLJSRrRh
Please read the what-is page and also the User’s Guide on how to run Nightshade. It is a bit more involved than Glaze
— Glaze at UChicago (@TheGlazeProject) January 19, 2024
A group of Chicago-based developers has unveiled Nightshade, an innovative tool empowering artists to protect their digital artwork from unauthorised use in AI training.
Nightshade introduces ‘poison’ samples imperceptible to the human eye but disruptive to an AI’s learning process, leading to incorrect associations and responses. As more ‘poisoned’ images infiltrate a dataset, the tool progressively undermines the model’s performance.
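To give a feel for the idea, the sketch below adds a small, bounded perturbation to an image array, one that is tiny per pixel yet spread across the whole image. This is a simplified illustration of the general concept only, not Nightshade’s actual algorithm, which optimises perturbations against target models; the function name and parameters here are hypothetical.

```python
import numpy as np

def perturb_image(image: np.ndarray, epsilon: float = 2.0, seed: int = 0) -> np.ndarray:
    """Illustrative only: add a random perturbation bounded by +/- epsilon
    per channel value. Real data-poisoning tools such as Nightshade compute
    perturbations deliberately, to steer a model's learned associations."""
    rng = np.random.default_rng(seed)
    delta = rng.uniform(-epsilon, epsilon, size=image.shape)
    # Keep pixel values in the valid 0-255 range after perturbation.
    perturbed = np.clip(image.astype(np.float64) + delta, 0, 255)
    return perturbed.astype(np.uint8)

# A random 32x32 RGB "artwork" stands in for a real image.
original = np.random.default_rng(42).integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
poisoned = perturb_image(original)
max_change = int(np.max(np.abs(poisoned.astype(int) - original.astype(int))))
```

The point of the sketch is that `max_change` stays at most a couple of intensity levels per pixel, far below what the human eye notices, yet every pixel can differ, which is the property a poisoning tool exploits at dataset scale.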
Nightshade complements the University of Chicago’s Glaze, an earlier tool that defends artists against data scraping by altering colours and brush strokes to mimic a different artistic style. Unlike Glaze, Nightshade takes an offensive stance, strategically disrupting AI training datasets to thwart unethical practices.
University of Sheffield Develops CognoSpeak AI Tool for Early Dementia Diagnosis
Researchers @DBlackburnShef & @christensendkuk from the @sheffielduni have developed an AI tool that promises faster #dementia diagnosis.
The tool #CognoSpeak uses advanced #AI to analyze speech patterns for early signs of #Alzheimer’s #Healthcare https://t.co/177279O03n
— DailyAI (@DailyAIOfficial) January 24, 2024
Researchers from the University of Sheffield, UK, have introduced CognoSpeak, an AI tool aimed at expediting the diagnosis of early signs of dementia and Alzheimer’s disease.
Supported by the NHS and the National Institute for Health and Care Research, CognoSpeak utilises AI and speech technology to analyse language and speech patterns during virtual patient interactions.
Early trials indicate its accuracy rivals traditional pen-and-paper assessments in predicting Alzheimer’s, with an 86.7% sensitivity in distinguishing between cognitive disorders. CognoSpeak is currently undergoing broader trials with 700 participants from UK memory clinics, offering the potential to reduce waiting times, start treatments sooner, and transform dementia diagnosis.
AI continues to show promise in supporting neurological health, with speech analysis playing a crucial role in early detection.