Trust Me, I'm a Trekkie.
How Star Trek: The Next Generation prepared me for A.I. technology.
My obsession with Star Trek: The Next Generation prepared me for the rise of AI. 🖖🏼
Growing up, I was obsessed with the sci-fi universe Gene Roddenberry created in the late 1960s, though it was the '80s revival, Star Trek: The Next Generation, that pulled me in.
ST: TNG took me to strange new worlds weekly, energizing my active imagination. Each episode explored modern-day themes through the lens of space travel, interstellar conflict, and scientific discovery.
I learned about art, history, and culture through brilliantly executed storylines. The show also established my belief that technology has the potential to enhance our lives.
Transporters, replicators, universal translators, bio-scanners, tricorders, and holodecks all have influenced the evolution of the technology we know today.
Whenever I ask Google to play a track from Spotify, I feel like I'm twelve again, pretending to be on the Enterprise and asking the computer to play a popular Earth song from the late 20th century.
Many Trekkies know that integrating technology into society isn’t as simple as handing over the keys to a starship, saying “Live long and prosper,” and then bolting out of there at maximum warp.
Technological advances have the potential to do more harm than good, especially in a society that isn’t prepared for it.
The Prime Directive (Starfleet General Order 1) is the guiding principle of Starfleet, prohibiting interference with the natural development of an alien civilization.
The goal is to protect technologically less-advanced civilizations from losing their shit at the sight of advanced technology.
Here is an excerpt I pulled from Wikipedia:
Section 1:
Starfleet crew will obey the following with any civilization that has not achieved a commensurate level of technological and/or societal development as described in Appendix 1.
a) No identification of self or mission.
b) No interference with the social, cultural, or technological development of said planet.
c) No reference to space, other worlds, or advanced civilizations.
d) The exception to this is if said society has already been exposed to the concepts listed herein. However, in that instance, Section 2 applies.
Section 2:
If said species has achieved the commensurate level of technological and/or societal development as described in Appendix 1, or has been exposed to the concepts listed in Section 1, no Starfleet crew person will engage with said society or species without first gathering extensive information on the specific traditions, laws, and culture of that species' civilization. Then Starfleet crew will obey the following.
a) If engaged in diplomatic relations with said culture, Starfleet crew will stay within the confines of said culture's restrictions.
b) No interference with the social development of said planet.
Interesting fact: The Prime Directive reflected a contemporary political view that US involvement in the Vietnam War was an example of a superpower interfering in the natural development of Southeast Asian society; the creation of the Prime Directive was perceived as a repudiation of that involvement.
I often wonder if those at the forefront of developing new technology, especially AI, pause and consider the downstream implications of what is being created.
Do they take the time to reflect on the answers to these questions before moving forward at warp speed?
- What are our intentions?
- Have we considered the broad, sweeping implications?
- Did we seek input from a diverse range of perspectives and demographics?
- Is society prepared for it?
- What would they need to learn to utilize it fully?
Take a minute and imagine someone being handed a chainsaw, a tool they've never seen, and then left to figure out what the hell it's for on their own. There's a big chance someone gets hurt, right?
As consumers of new technology, whether physical products (iPhone), online platforms (Facebook, TikTok), or AI-enabled applications (ChatGPT, Google Bard, Perplexity), we should approach it with curiosity and a healthy dose of skepticism.
The rapid advancement and accessibility of AI tools have opened the floodgates to a deluge of AI-generated content.
AI tools in the hands of bad actors mean we officially can no longer believe anything on the internet, at least not at first sight.
Slowing down to allow for curiosity and questioning will help us more than any new app or AI-generated label.
The advancements in AI and machine learning remind us that technology can be used to improve our lives. History reminds us, however, that periods of rapid innovation and automation bring disruption. The question is: at whose expense?
For instance, pervasive social media algorithms can create echo chambers, exacerbating social divides. Bots quietly reshape how we navigate the internet, and generating images or text containing dis- and misinformation is easier than ever.
Facial recognition technology, used to unlock your phone, swap faces in a silly app, or identify a potential suspect in crime-scene footage, raises ethical concerns about individual privacy and racial bias, concerns that many social justice and AI experts had been sounding the alarm about long before ChatGPT became a household name.
We are the Humans in the Loop
We cannot blame technology for its negative consequences. It is simply a tool doing what it was designed to do.
However, very real humans are responsible for the design, engineering, and deployment of these technologies, and they absolutely deserve our attention and our critiques.
Is AI taking jobs, or do CEOs and corporations see AI automation as a way of saving costs?
We must correctly identify the actual humans behind the curtain to ensure they are held accountable for their work.
We must shift the focus to the real heroes or villains of the story. And we should stop anthropomorphizing AI, a habit I've been guilty of myself.
The name of this publication, “A Human in the Loop,” comes from the world of AI and machine learning, embodying the crucial role of human involvement in guiding technology’s evolution.
Human intervention in the development of these technologies is vital. Human input should include ethical oversight, diverse representation in development teams, and public discourse on technology’s societal impact. These practices ensure that technology serves humanity’s broadest interests.
I fear that our fatigue-induced apathy and ignorance of technology, exacerbated by the erosion of critical thinking, have put us in the backseat of a runaway self-driving car with no clue how to get back in the driver's seat.
And the supercomputers in our pockets—designed to connect us to the world and people around us—have stolen our attention, oversaturating us with content to the point that disengaging has become an almost impossible rebellious act of self-care.
To navigate our technological future, we can turn concern into action by advocating for transparent AI development processes, supporting legislation that protects individuals from technology’s potential harms, and participating in community initiatives that educate and empower users about their digital rights and responsibilities.
Start by asking yourself:
- Where are you along the spectrum of digital whiz to analog purist?
- How do you feel about that?
- Are you content or anxious about the wave of AI advancements and access?
The current AI trajectory may not be clearly defined, but basic skills like writing effective prompts and automating small tasks are easier to pick up than you think. The best way to learn is by doing.
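"Writing effective prompts" mostly means being explicit about the role, the task, and the constraints you want the model to honor. Here's a minimal Python sketch of that idea, a reusable prompt template you could paste into any chatbot; the structure and function name are my own illustration, not an official convention from any AI vendor:

```python
def build_prompt(role: str, task: str, constraints: list[str], example: str) -> str:
    """Assemble a structured prompt from labeled parts.

    Separating role, task, constraints, and an example makes prompts
    easier to reuse and tweak than one-off, ad hoc requests.
    """
    lines = [
        f"You are {role}.",
        f"Task: {task}",
        "Constraints:",
        # One bullet per constraint keeps each requirement explicit.
        *[f"- {c}" for c in constraints],
        f"Example of the desired output: {example}",
    ]
    return "\n".join(lines)


prompt = build_prompt(
    role="a concise writing assistant",
    task="Summarize the article below in three bullet points.",
    constraints=["Use plain language", "No more than 20 words per bullet"],
    example="- The show shaped the author's view of technology.",
)
print(prompt)
```

The payoff is less about the code and more about the habit: naming your intent and limits up front, the same questions this essay asks of technologists, tends to get you far better answers than a vague one-liner.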
The real work will require continual effort, creativity, and active engagement.
If you want to shape the technology that shapes our lives, these skills will be critical on the path ahead.
Live long and prosper. 🖖🏼
Thank you for taking the time to read my work!
Show your support for my work by buying me a coffee or becoming a paid subscriber.