EU countries want to share photos for facial database

In short The European Commission wants to create a giant facial recognition database that will be shared with law enforcement in different countries.

Police in Europe have been able to share details such as fingerprints and DNA in criminal investigations for the past 15 years under the Prüm framework, and lawmakers are now trying to expand it, via a proposal known as Prüm II, to cover facial recognition.

The latest documents, seen by Wired, show just how large this potential database could grow. Countries capture all kinds of images of their citizens: Hungary holds 30 million photos, Italy 17 million, France 6 million, and Germany 5.5 million. These official photographs cover everyone from criminal suspects to asylum seekers.

Experts are increasingly pushing back against the proposal launched late last year. “What you are creating is the most extensive biometric surveillance infrastructure I think we have ever seen in the world,” said Ella Jakubowska, policy adviser at civil rights NGO European Digital Rights.

An EU spokesperson, however, said that under Prüm II “only facial images of suspects or convicted criminals can be exchanged”. “There will be no matching of facial images to the general population.”
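The “matching” at issue here is the standard one-to-many biometric search: a probe image is reduced to an embedding and compared against every stored template. Below is a minimal sketch of how such a search typically works, using cosine similarity over face embeddings; the embedding size, threshold, and database layout are illustrative assumptions, not details of the EU's actual system.

```python
import numpy as np

# Hypothetical sketch of a one-to-many face search; the embeddings
# would normally come from a pretrained face-recognition model.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_database(probe, database, threshold=0.6):
    """Return (subject_id, score) pairs scoring above an illustrative threshold."""
    hits = [(subject_id, cosine_similarity(probe, emb))
            for subject_id, emb in database.items()]
    return sorted((h for h in hits if h[1] >= threshold),
                  key=lambda h: h[1], reverse=True)

# Toy example: random vectors stand in for real face embeddings.
rng = np.random.default_rng(0)
db = {f"subject_{i}": rng.normal(size=128) for i in range(1_000)}
probe = db["subject_42"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
print(search_database(probe, db)[:3])  # subject_42 should top the list
```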

A new Internet language ‘algospeak’ is emerging

Internet users are inventing new words to circumvent content-moderation algorithms, a coded vocabulary dubbed “algospeak”.

Posts on social media platforms like TikTok or Instagram can be automatically deleted if they contain toxic or NSFW content. To get around these rules, people replace banned words with terms the algorithms don’t recognize so their photos or videos won’t be taken down, according to the Washington Post.

Common examples include saying “not alive” instead of “dead”, “SA” instead of “sexual assault”, “spicy eggplant” for “vibrator”, and “nip nops” instead of “nipples”.

“The reality is that tech companies have been using automated tools to moderate content for a very long time and although it’s presented as this sophisticated machine learning, it’s often just a list of words they think are problematic,” said Ángel Díaz, a senior lecturer at UCLA Law School who studies technology and racial discrimination.
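Díaz's point is easy to see in miniature. Here is a hypothetical sketch of the kind of crude keyword filter he describes, and how a single algospeak substitution slips past it; the banned-word list is illustrative, not any platform's actual rules.

```python
# Hypothetical keyword-based moderation of the kind Díaz describes:
# often just a hard-coded list of words deemed problematic.
BANNED_TERMS = {"dead", "sexual assault", "nipples"}  # illustrative list

def is_flagged(post: str) -> bool:
    """Flag a post if it contains any banned term as a substring."""
    text = post.lower()
    return any(term in text for term in BANNED_TERMS)

print(is_flagged("my grandmother is dead"))       # True: caught by the list
print(is_flagged("my grandmother is not alive"))  # False: algospeak slips through
```

Substring matching like this also misfires in the other direction: “deadline” contains “dead” and would be flagged too.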

Messages written in algospeak aren’t necessarily offensive; sometimes people use made-up phrases to talk about sensitive topics like mental health or sexuality.

Automated radiology tool gets EU green light

An AI-powered tool that automatically flags healthy-looking X-rays has been given the green light in the European Union, paving the way for the software to be used in real-life clinical settings in 32 countries.

ChestLink has received CE Class IIb certification, according to its creator, Oxipit. “It aims to address the shortage of radiologists and their increasing workload,” company spokesman Mantas Mikšys told The Register.

ChestLink screens X-rays for problems such as nodules lodged in a patient’s lungs, and automatically files the report when it finds none.

“Even if the patient is healthy, the radiologist still has to complete the report. This is a mundane and routine task. Even with current automation, ChestLink autonomously reports 15 to 40 percent of the daily workflow. It just removes these studies from the radiologist’s daily workload. A radiologist can spend more time analyzing images with pathologies,” Mikšys added.
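Oxipit hasn't published ChestLink's internals, but the workflow Mikšys describes amounts to a confidence-gated triage step: finalize a report only when the model is near-certain a study is normal, and route everything else to a radiologist. A minimal sketch under that assumption; the threshold and names below are hypothetical, not Oxipit's actual code.

```python
from dataclasses import dataclass

@dataclass
class Study:
    study_id: str
    abnormality_score: float  # from some chest X-ray model, 0.0-1.0

AUTONOMOUS_THRESHOLD = 0.01  # illustrative: auto-report only near-certain normals

def triage(studies: list[Study]) -> tuple[list[Study], list[Study]]:
    """Split studies into auto-reported normals and the radiologist worklist."""
    auto_reported = [s for s in studies if s.abnormality_score < AUTONOMOUS_THRESHOLD]
    worklist = [s for s in studies if s.abnormality_score >= AUTONOMOUS_THRESHOLD]
    return auto_reported, worklist

studies = [Study("CXR-001", 0.002), Study("CXR-002", 0.41), Study("CXR-003", 0.006)]
normals, for_review = triage(studies)
print(f"auto-reported: {[s.study_id for s in normals]}")    # CXR-001, CXR-003
print(f"to radiologist: {[s.study_id for s in for_review]}")  # CXR-002
```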

Now that ChestLink has been cleared for use, clinicians can use it in the real world and focus on providing better care to patients who need it.

It is the first regulatory-approved AI medical imaging tool that can operate autonomously in the EU. The company plans to start rolling out its software to hospitals next year.

Can AI tell how we feel from our voices?

Researchers are experimenting with AI systems to analyze people’s voices to try to identify psychiatric disorders, such as depression or schizophrenia.

But does the technology really work? There is some evidence that people with depression or anxiety tend to speak in a more monotonous, quieter, or faster manner, Maria Espinola, a psychologist and assistant professor at the University of Cincinnati College of Medicine, told The New York Times.
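The cues Espinola mentions map onto measurable acoustic quantities: monotony shows up as low pitch variability, quietness as low RMS energy, and tempo in how energy fluctuates. A minimal sketch of how such features might be computed from a raw waveform with numpy; the autocorrelation pitch tracker and frame size are illustrative simplifications, not any published diagnostic pipeline.

```python
import numpy as np

def frame_pitch(frame: np.ndarray, sr: int, fmin: float = 75.0, fmax: float = 300.0) -> float:
    """Estimate the pitch of one frame via autocorrelation (a crude method)."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)  # candidate lags for 75-300 Hz
    if hi >= len(corr):
        return 0.0
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sr / lag

def voice_features(signal: np.ndarray, sr: int, frame_len: int = 2048) -> dict:
    """Compute simple proxies for monotony, loudness, and dynamics."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len, frame_len)]
    pitches = np.array([frame_pitch(f, sr) for f in frames])
    voiced = pitches[pitches > 0]
    rms = np.array([np.sqrt(np.mean(f ** 2)) for f in frames])
    return {
        "pitch_std_hz": float(voiced.std()) if len(voiced) else 0.0,  # low = monotone
        "mean_rms": float(rms.mean()),                                # low = quiet
        "rms_std": float(rms.std()),                                  # rough dynamics proxy
    }
```

Feeding features like these into a classifier is then a standard supervised-learning problem, subject to exactly the dataset caveats Chang raises below.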

Diagnosing and treating mental illnesses is tricky and requires careful analysis that goes beyond just hearing how someone speaks. Still, researchers are exploring whether AI can help, since it might detect signs our own ears can't pick up. The idea raises questions, however, especially when the technology is hard to interpret and susceptible to bias.

“For machine learning models to work well, you really need to have a very large, diverse, and robust data set,” said Grace Chang, founder of Kintsugi, a startup that has developed an app that tracks users’ emotional states by listening to their voice.

Datasets should represent people of different ethnicities, ages, and genders, a diversity that medical datasets often lack.

Whether machine learning can or should analyze voices to accurately study mental health remains debatable. ®