Fire officials in the N.W.T. are warning about misinformation and misleading images circulating online about the territory's wildfires, and urging residents to rely on official sources and local media for emergency information.
Shortly after Fort Providence, N.W.T., was evacuated on Sunday due to a nearby wildfire, an AI-generated image began circulating in a post about the community. It appeared to show flames approaching houses in Fort Providence.
N.W.T. Fire blasted the online post, saying on its own page that the AI-generated image "does not reflect current conditions in Fort Providence," and called the post "sensationalized slop."
N.W.T. fire information officer Mike Westwick said such posts are misleading and dangerous.
"We thought as an organization it was important to point out that it was misinformation, that it was dangerously inaccurate, and to also just make a point about media literacy during disasters," said Westwick.
The image, which has since been taken down, was posted to a Facebook page with 70,000 followers and was shared more than 400 times. It also generated more than 200 comments, some of them warning others of the fake image, while others reacted as if the image were real.
Westwick says it doesn't take long for AI content to start flowing when emergencies hit and that not everyone is used to sorting through false information on social media.
He says the Fort Providence AI image wasn't the only social media incident officials have been made aware of since the evacuation. On Sunday morning, a Facebook page called the Hotshot Wake Up posted an older video of an N.W.T. wildfire, along with a description of the Fort Providence evacuation.
Westwick says such posts can also be misleading.
"Certainly it wasn't helpful at the time because it made it seem like that fire was behaving in a way that it just wasn't at that time," he said.
Westwick says residents should seek official information whenever possible. He pointed to the N.W.T. Fire website, which is regularly updated, as well as its Facebook page. He also says fire officials communicate with local media, including radio and TV.
Earlier this month, the B.C. Wildfire Service also sounded the alarm on a rise in AI-generated wildfire images in that province, which it said were contributing to online misinformation and exacerbating stressful situations.
'Just traffic to their website'
Vered Shwartz, a Canadian Institute for Advanced Research (CIFAR) AI chair at the Vector Institute and assistant professor of computer science at the University of British Columbia, says fake images are often created and shared online for financial gain.
"Why would someone generate fake images of something real that is happening, and what do they stand to gain from that?" Schwartz asked. "I think if it's not immediately clear what they have to gain from that, it's probably just traffic to their website."
Shwartz says the best way to sort the real from the fake is to go to official sources for information. She says that while detection tools for AI-generated images exist, they too can't be entirely trusted.
"These AI models, what they're doing is exactly trying to mimic the statistics of the real world data, the real images. And so, the better they're getting, the harder it is for detection tools to say that it's actually different from a real image."
Fake content harmful for those experiencing disasters, says expert
Maleknaz Nayebi, an associate professor of computer science and engineering at York University, says fake content can be emotionally damaging as well as confusing.
"The illusion that it creates for many people…Sometimes it's pretty damaging, because people would think that, you know, the fire is close to them or close to their loved ones or close to their houses," she said.
Nayebi says sometimes people can share AI-generated posts and images with good intentions, not realizing the harm they can cause.
"People just don't anticipate what type of problems these AI images might have."
She says it is difficult to put all the responsibility on the public to decipher AI-generated content. She suggests the creation of evidence- and research-based fact-checking platforms for AI images.
"Research institutes, governments, can work on that, and then there should be very clear legal consequences that are being executed at the same time," she said.