Just this past week, a video of Facebook CEO Mark Zuckerberg emerged on Instagram that featured real video footage with lip-synced audio referring to the power Facebook has built from its users' data. Unlike the deepfake of President Obama released by researchers at the University of Washington in 2017, which used the president's own voice, the dubbed audio here was not Zuckerberg's. Despite the potential for the video to be misconstrued as Zuckerberg himself, Instagram (which is owned by Facebook) said it would keep the video up on the platform. There was pressure to do so, given Facebook's earlier decision to leave a doctored video of House Speaker Nancy Pelosi up on its platform.
What is unique about the Zuckerberg video is the purpose behind it. The video is both a work of art created by Bill Posters as part of his Spectre art installation and an advertisement for the AI tool CannyAI. When I visited the CannyAI site and scrolled down, there was a section called “A new way to create.” The first example listed was “Increase Student Success: Speak the same language as your students.” CannyAI markets its tools as a way to recycle your own footage with new audio or to “tell your story in any language.” This technology could eventually make its way into the flipped classroom model, allowing educators to provide instructional videos in multiple languages or to reuse existing footage and adjust only the audio. According to a Vice article about the video, CannyAI looks ahead to a time when “each one of us could have a digital copy, a Universal Everlasting human.”
I see the benefits of these uses, and yet this technology still needs close examination regarding privacy, especially if someone’s video is doctored without their consent or if videos of minors are edited. There are also concerns over digital footprint and privacy should, as CannyAI envisions, someone pass away while a “digital copy” of them continues to make content and “tell stories,” as the company describes. Who, then, owns the rights to our legacy? If this technology is used with students, what happens when they have a digital copy of themselves at age 15, age 18, age 21, and beyond? Lastly, what if these videos are used to spread misinformation, even unintentionally? Educators who make the videos may have no control over who else dubs over their work, changing the content completely.
To be clear, this technology is here, and it is not going away. Like any new disruptive technology, it will push us to consider both its positive attributes and potential and the ways it may change the boundaries of perception and our trust in media, exacerbating our struggle with truth. The best we can do is think ahead and consider how we as educators, and as a society, can leverage a tool like this while remaining thoughtful about the lasting impact it will have.