
Croatian community discusses combating Russian disinformation

#DiasporaNews
December 13, 2024

The Ukrainian House in Zagreb hosted two seminars with Sonia Dymytrova-Martyniuk, an expert in debunking fake news from the Kyiv-based StopFake project, which was established in 2014. The events were organized through the joint efforts of the StopFake team, the Ukrainian community in Zagreb, and the Embassy of Ukraine in Croatia.

The seminars focused on countering disinformation in Central Europe, identifying markers of informational threats, and utilizing digital technologies and social platforms to counter disinformation.

Ksenia Meleshko, a representative of the Ukrainian community, compiled a brief report on the fight against Russian disinformation based on the discussions. The Ukrainian World Congress is publishing an abridged version of the text.

Fact-checking has become an essential part of our daily lives, not just in Ukraine, which is suffering from Russia’s brutal military aggression and information warfare, but also for Ukrainians abroad. The goal of this information war is to demoralize Ukrainians, no matter where they are, and undermine the unity of countries and their support for Ukraine. The enemy actively uses social networks, specially crafted online pages and channels, and modern technologies powered by artificial intelligence (AI) to spread disinformation.

“Fighting fake news is not just an issue for Ukraine but for all of Europe,” said Sonia Dymytrova-Martyniuk, an expert with the StopFake project. “As our conversations reveal, people abroad are often unaware of the true situation in Ukraine.”

According to research on Russian disinformation that targets the EU, prior to the full-scale invasion of Ukraine, fake news about Ukrainian refugees accounted for 2.7 percent of all disinformation. After Feb. 24, 2022, this number grew sixfold, reaching 15 percent of the total volume of disinformation.

Today, 27.3 percent of fake news is aimed at the EU, with Russia focusing particularly on audiences in Poland, Germany, Britain, France, and Italy. Analyzing fake news helps us predict future developments. It’s crucial to notice even the smallest details. For example, the appearance of numerous microphones at a briefing by Eduard Basurin, the unofficial spokesperson for the Kremlin-installed “Donetsk People’s Republic,” was an early sign of preparations for a Russian attack in the coming days.

Participants of a seminar.

Unfortunately, many media outlets continue to repeat news releases by Russia’s Federal Security Service. Even reputable sources sometimes fail to identify fake news. For instance, French war correspondent Adrien Boke was widely invited to media appearances after claiming he visited Bucha in mid-April 2022, where he allegedly spoke with Azov fighters and witnessed killings. 

However, our journalist confirmed with the Ukrainian State Border Guard Service that Boke was in Ukraine only twice, on April 4 and 6, 2022, for one day and half a day, respectively. Given the situation at the time, it was physically impossible to reach Bucha, as all routes were blocked. Despite this, French media continued to feature Boke’s stories until “Libération” conducted its own investigation. It found that while Boke was indeed injured, it wasn’t under the circumstances he had claimed. He turned out to be known for fabricating stories, and within a year he had obtained Russian citizenship.

Russian bots also analyze the behavior of users who engage with fake news, allowing disinformation campaigns to target their audiences more precisely. These systems can determine the best times to post, how often to post, and which strategies spread disinformation most effectively. This is why it’s so important to avoid downloading files from suspicious sources, participating in surveys on dubious websites, or trusting unverified emails. Always rely on information from official and trusted sources.

To verify sources, journalists often use Osavul, a special program developed by Ukrainian IT specialists – something of which we can all be proud.

How to identify AI-generated content

Today, many videos, audio recordings, and images generated by AI are used to demoralize Ukrainians. It is crucial to learn how to identify deepfakes – distinguishing what is real from what has been manipulated by the enemy. A deepfake is a photo or video created using artificial intelligence with the intention to deceive the viewer.

Sonia Dymytrova-Martyniuk.

“Suspicious images often display unnatural colors, lack shadows, or have shadows that don’t align with the light source,” explains Dymytrova-Martyniuk. “The programs used for disinformation are not perfect, so you might notice strange details, such as people with six fingers, missing limbs, or medals that don’t exist. It’s essential to pay close attention to the details in an image.”

When analyzing suspicious photos, it’s important to focus on:

  • Unnatural characteristics: AI-generated images often have a distinct shine and overly saturated, sharp colors that lack natural variation, as well as inconsistent lighting and shadows that do not align with the light source.
  • Hands: Since AI datasets mainly focus on human faces, creating human fingers often presents problems. While this flaw will likely be resolved over time, it’s important to always check the fingers of a person in the image when visible.
  • Blurry background: A suspicious or blurry background, or a background stitched together from unrelated images, should raise red flags. Look, for example, for lines that don’t align, furniture that appears out of proportion, or uneven lighting (such as lamps of different sizes).
  • Hair: AI-generated hair often looks too thick, sometimes floating unnaturally above the person’s head, or lacks natural gradients and textures.
  • Accessories: Watches may be unnaturally placed on the body, and jewelry may appear to “float” away from the body.

Deepfakes are created using tools like Deepfakes Web, Reface, and FaceApp. To verify such content, you can use tools like SynthID by Google, IS IT AI, AI or Not, Deepware, Google Image Search, metadata viewers like ExifTool and Exif Pilot, image scaling tools like AI Photo Zoomer and VanceAI Image Upscaler, and InVID.
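The metadata viewers mentioned above work by reading the EXIF data embedded in image files. As a rough illustration of what such a check involves (a minimal, standard-library-only Python sketch, not the behavior of any specific tool named here), a script can test whether a JPEG file even contains an EXIF segment; AI-generated or heavily re-processed images are often stripped of camera metadata:

```python
def has_exif(data: bytes) -> bool:
    """Return True if a JPEG byte stream contains an EXIF APP1 segment.

    A missing EXIF block is one (weak) warning sign that an image was
    generated or re-encoded rather than taken straight from a camera.
    """
    if not data.startswith(b"\xff\xd8"):  # JPEG files begin with the SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:               # no longer inside a marker segment
            break
        marker = data[i + 1]
        # Segment length (big-endian, includes the 2 length bytes themselves)
        length = int.from_bytes(data[i + 2:i + 4], "big")
        # APP1 (0xE1) segments carrying EXIF start with the "Exif\0\0" header
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + length                   # skip to the next marker
    return False
```

A missing EXIF block proves nothing on its own, since social platforms routinely strip metadata on upload, but its presence and contents are a useful first data point before turning to the dedicated tools listed above.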

Four main rules for verifying information

  • Examine who the author of the information is, what their intent might be, and the underlying basis for the message. It’s helpful to check other articles or statements by the same author.  
  • Verify whether other reputable sources are sharing similar information. If only one source is reporting it, question where they obtained this exclusive information that others haven’t found.  
  • Regardless of the source’s stature, always check the original source.  
  • Be mindful of the date of the news: sometimes, old events or statements are presented as if they occurred or were made recently.

Signs of fake news

When reading the news, the first thing to watch out for is the headline. It might be misleading in how it presents the information or overly emotional. Unfortunately, if a fake story catches the audience’s attention, it tends to be used repeatedly.  

  • The headline doesn’t match the content of the article, and the text is about something completely different.  
  • The news relies on questionable or anonymous sources instead of providing specific references.  
  • The article is illustrated with dubious photos and videos.  
  • The information is filled with excessive emotions, exclamation marks, or question marks.  
  • The news quotes biased or untrustworthy experts.  

Dymytrova-Martyniuk stressed the importance of debunking fake news as quickly as possible. She also reminded us of Mark Twain’s famous words: “A lie can travel halfway around the world while the truth is still putting on its shoes.”
