Virtual Teacher

Design & Concept by Cathy Brown. AI image created with NightCafe.

AI: Good or Evil?

Summary

Technological progress has been consistently strangled by fear-driven regulations. From the early days of the telegraph to today’s AI frameworks, regulations have been built on worst-case scenarios, designed by those with little practical understanding and a lot to lose. Sound familiar? We’re doing it again with AI.

Vested interests and outdated thinking have historically stifled innovation, and the parallels to today's AI regulations, which could kill progress before it even begins, are clear.

Introduction

Often written by academics with little to no hands-on experience with AI, these frameworks position their authors as authorities on the subject. Their lack of practical exposure leads them to focus on theoretical risks and hypothetical worst-case scenarios, rather than the actual opportunities AI presents. These frameworks offer no advice on application or implementation. Instead, they serve to preserve the roles of those creating them and maintain a sense of relevance, rather than fostering meaningful progress.

In many cases, these academics use fear-driven language, invoking concerns over privacy, ethics, safety, cheating, or litigation to justify restrictive measures. The result is a set of guidelines that slow innovation and misalign with the real-world applications of AI. These frameworks are often devoid of factual evidence or practical insights, relying instead on perceived risks that rarely materialize as predicted.

This is not a new phenomenon. Throughout history, those with limited understanding of emerging technologies, and with vested interests in retaining power or protecting established practices, have written regulations based on exaggerated fears.

These frameworks quickly became obsolete or restrictive. Like outdated telegraph laws or overly cautious internet regulations, today's AI frameworks risk stifling progress rather than guiding it, all because they are written by individuals & organisations more focused on retaining their influence, power, and control, protecting established practices, or preserving their commercial interests and jobs, than on embracing the future of technology.


A Notable Example

The 'Red Flag' Law and Automobiles (1865-1896)

One of the clearest historical parallels is the Red Flag Law in the UK, which was imposed on early automobiles.

The law required a person to walk in front of each car waving a red flag to warn pedestrians of its approach. It was based on fears that automobiles were inherently dangerous to society. The regulation became obsolete almost immediately and stifled innovation, as the anticipated dangers never materialized to the degree expected.

Vested Interests – Who Proposed Them?

The legislation was pushed by those with vested interests in maintaining the status quo, such as the horse-drawn carriage industry, which saw the rise of automobiles as a threat to its livelihood.

These interests prioritized their economic protection over fostering open innovation and experimentation in automotive technology, inhibiting progress and delaying the practical and widespread adoption of automobiles. These regulations often lagged behind the rapid pace of technological advancement, becoming obsolete or unnecessarily restrictive almost immediately.

Read the full White Paper:

Moments in History: Where Technology Frameworks or Regulations Were Created Based on Perceived Issues That Later Proved to Be Exaggerated or Misaligned with the Actual Development of Technology

by Cathy Brown, October 11th, 2024




Virtual Teacher is committed to ensuring that our AI systems & assistants are used responsibly & ethically. Our AI is designed to support educators & students by providing personalized learning experiences, enhancing engagement & promoting understanding. We prioritize the safety, privacy, & security of our users, ensuring that our AI tools operate transparently & align with the best practices in the industry.
The NSW AI Assessment Framework (AIAF) requires a self-assessment to determine whether your system or project should use the AIAF. All AI projects used by Virtual Teacher are Low Risk or No Risk applications. Check out the Risk Evaluation page attached.
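
As a rough illustration only, the sketch below shows how such a risk self-assessment might be encoded in Python. The tiers and criteria here are hypothetical placeholders, not the actual AIAF questionnaire; consult the framework itself for the real assessment.

# Hypothetical sketch of an AIAF-style risk self-assessment.
# The criteria and tiers are illustrative placeholders only.
def assess_risk(handles_personal_data: bool,
                makes_automated_decisions: bool,
                affects_vulnerable_users: bool) -> str:
    """Return a rough risk tier for an AI project."""
    if makes_automated_decisions or affects_vulnerable_users:
        return "Higher Risk: complete the full AIAF assessment"
    if handles_personal_data:
        return "Low Risk"
    return "No Risk"

# Example: a lesson-suggestion assistant that stores no personal data.
print(assess_risk(handles_personal_data=False,
                  makes_automated_decisions=False,
                  affects_vulnerable_users=False))  # -> No Risk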
