Lessons from 1980s Car Removal Programs: Understanding Requirements for Effective Solutions

[Ed. note: Reflecting on past initiatives helps us prepare for future challenges. We revisit lessons from the 1980s to inform today’s complex projects as we gear up for the new year.]

Amidst discussions about the transformative power of Artificial Intelligence, concerns arise about AI potentially overshadowing human expertise. Some speculate that AI could directly address business needs, bypassing professionals. However, drawing from experiences in complex projects like the ambitious 1980s car removal programs, we see a different picture. To those who have navigated the intricacies of problem-solving for years, the notion of simply asking AI to solve complex issues seems overly simplistic.

While implementing solutions can be intricate, the real challenge often lies in clearly defining the problem itself. In the context of the 1980s car removal programs, the difficulty wasn’t just the physical removal of vehicles, but understanding which cars to remove, from where, and under what conditions. Similarly, in software and AI, the core challenge isn’t coding or algorithms, but defining precise requirements. These requirements, much like the policies and procedures of a car removal program, are fundamentally human-defined.

This article explores the crucial relationship between well-defined requirements and successful outcomes, drawing parallels from the experiences of the 1980s car removal programs and applying these lessons to the realm of AI and software development. We will examine what it truly takes for any system, AI or otherwise, to yield effective results, emphasizing the indispensable role of human clarity and foresight.

It’s not a glitch, it’s policy… no wait, it’s a glitch in the policy

Early in my career, I was involved in a project aiming to streamline city services, much like the efforts seen in the 1980s car removal programs. The goal was to create a system for managing urban spaces more efficiently.

My task involved designing a process for handling vehicle removal requests. This required accounting for various conditions: vehicle type, location, and legal status within different city zones. Similar to configuring software for e-commerce, this involved dynamic rules and conditional logic.

At one point, I identified a potential flaw. The system allowed for a vehicle removal request to be initiated under one set of criteria, but later in the process, those criteria could be overridden. For example, a car initially flagged for removal based on parking violations could be later exempted due to a different, less critical status update. This contradicted a core principle outlined in the program’s initial policy document – a document signed off by city officials and stakeholders.
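To make the gap concrete, here is a minimal sketch in Python of the kind of guard that was missing. The names, statuses, and data model are invented for illustration; the point is that a later status update should only clear a removal approval if it actually addresses the reason that approval was granted.

```python
from dataclasses import dataclass, replace

# Hypothetical sketch -- the statuses, reasons, and data model are invented,
# not taken from the actual program.

# Which later statuses legitimately cancel which removal reasons. This mapping
# is pure policy: people have to define it; no system can infer it.
EXEMPTING_STATUSES = {
    "parking_violation": {"violation_dismissed", "fines_paid"},
    "abandoned": {"owner_reclaimed"},
}

@dataclass(frozen=True)
class RemovalRequest:
    vehicle_id: str
    reason: str            # why removal was requested, e.g. "parking_violation"
    removal_approved: bool

def apply_status_update(request: RemovalRequest, new_status: str) -> RemovalRequest:
    """Apply a later status update without silently overriding the original criteria."""
    exempting = EXEMPTING_STATUSES.get(request.reason, set())
    if request.removal_approved and new_status not in exempting:
        # The update doesn't address the original violation, so the approval
        # stands -- this is the check the real system lacked.
        return request
    return replace(request, removal_approved=False)
```

Even in this toy version, the interesting part isn’t the conditional; it’s the EXEMPTING_STATUSES table, which only someone who understands the policy can fill in.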

I questioned a senior program manager, “Shouldn’t we prevent the system from allowing overrides that contradict the initial removal criteria?” The response I received was strikingly confident:

“That situation will never occur.”

This manager, deeply experienced with city operations and instrumental in shaping the car removal program, was certain of the operational flow. The override capability was intentionally included, supposedly for exceptional cases he believed would be exceedingly rare. Who was I, a junior analyst, to challenge the judgment of a senior official guiding this significant public initiative? I accepted his assurance and moved on.

Months later, shortly before the program’s public launch, a field test revealed an issue. A scenario I had flagged – overriding the initial removal criteria – was indeed happening. Vehicles that should have been removed were being retained in the system due to inconsistent application of the rules. Guess who was tasked with resolving this? And guess who was initially correct about the potential problem?

The technical fix was straightforward, and the immediate impact was contained. However, this experience became a recurring lesson throughout my career, echoing the challenges faced in the 1980s car removal programs and beyond. Talking with colleagues in urban planning and public policy, I realized this wasn’t an isolated incident. The scale and complexity of issues varied, but the root cause was often the same: unclear, inconsistent, or flawed initial requirements and policies.

Image: A flowchart illustrating decision-making processes, relevant to both software logic and policy implementation in programs like the 1980s car removal initiatives.

AI Capabilities: From Chess to Complex Urban Environments

Artificial intelligence has been a concept for decades, with recent advancements sparking both excitement and apprehension. AI’s application in chess, dating back to the 1980s, demonstrates its prowess in rule-based systems. AI now consistently outperforms humans in chess, unsurprising given chess’s finite parameters and clearly defined rules.

Chess starts with a fixed set of pieces on a defined board, governed by universally accepted rules, and a singular objective: checkmate. Each turn offers a finite number of moves. Chess AI operates within a closed rules engine, calculating move repercussions to optimize for piece capture, positional advantage, and ultimately, victory.
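A toy search routine is enough to convey how tractable that closed world is. This is only a sketch: the state object is assumed to expose legal_moves(), apply(), is_terminal(), and score(), placeholders rather than any real chess engine’s API.

```python
# Toy sketch of search in a closed rules engine. Because every legal move is
# enumerable and every outcome scorable, the whole problem reduces to search.

def minimax(state, depth: int, maximizing: bool) -> float:
    """Return the best score reachable within `depth` plies."""
    if depth == 0 or state.is_terminal():
        return state.score()
    outcomes = (
        minimax(state.apply(move), depth - 1, not maximizing)
        for move in state.legal_moves()
    )
    return max(outcomes) if maximizing else min(outcomes)
```

Everything the algorithm needs (the legal moves, the scoring, the terminal condition) is supplied by the rules of the game up front.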

Another area of AI focus is autonomous vehicles. Despite years of promises, truly self-driving cars with complete autonomy remain elusive. Current systems still require driver oversight; their self-driving capabilities fall short of full independence.

Similar to chess AI, self-driving cars rely heavily on rules-based engines. However, unlike chess, the rules for navigating real-world driving scenarios are vastly more complex and less defined. Drivers constantly make nuanced judgments – avoiding pedestrians, maneuvering around obstacles, navigating intersections. These judgments are critical, distinguishing between safe arrival and accidents. This complexity mirrors the challenges faced in large-scale initiatives like the 1980s car removal programs, where unforeseen variables and exceptions constantly arose.

In technology, high availability is paramount. “Five nines” or “six nines” availability—99.999% or 99.9999% uptime—is often the goal. Achieving the first 99% is relatively straightforward, allowing for significant downtime. However, each subsequent “9” dramatically increases the complexity and cost of achieving it. Reaching 99.9999% requires minimizing downtime to mere seconds annually, demanding exponentially greater planning and resources.

Availability   Downtime per year
99%            87.6 hours
99.9%          8.76 hours
99.99%         52.6 minutes
99.999%        5.26 minutes
99.9999%       31.5 seconds
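The figures above follow directly from the percentages; a few lines of Python are enough to reproduce them.

```python
# Allowed downtime is the unavailable fraction of a (non-leap) year.
SECONDS_PER_YEAR = 365 * 24 * 3600

for availability in (0.99, 0.999, 0.9999, 0.99999, 0.999999):
    downtime = (1 - availability) * SECONDS_PER_YEAR
    print(f"{availability:.4%} available -> {downtime / 3600:.2f} hours "
          f"({downtime:.1f} seconds) of downtime per year")
```

The arithmetic is trivial; what each additional nine costs in redundancy, monitoring, and operational discipline is not.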

Even with advanced AI, achieving near-perfect safety in self-driving cars, or flawless execution in programs like the 1980s car removal initiatives, is incredibly challenging due to the ever-present risk of unforeseen events. While human drivers also cause accidents, public and regulatory expectations for AI-driven systems are likely to demand safety levels at least as good as, if not exceeding, human performance.

The difficulty in achieving this safety level stems from the infinite variables in driving, far exceeding the finite possibilities in chess. The initial 95% or 99% of driving scenarios might be predictable. However, the remaining edge cases are numerous and unique: interactions with other drivers, road conditions, construction, weather, unexpected obstacles. AI systems struggle to anticipate and appropriately respond to these anomalies, mirroring the policy exceptions and unforeseen situations that complicated the 1980s car removal programs. Each situation shares similarities but is rarely identical, making it hard for AI to generalize and react perfectly.

AI Can Generate Code, Not Necessarily Solutions

Developing and managing software, or implementing large-scale programs like the 1980s car removal initiatives, is more akin to driving than playing chess. These endeavors involve countless variables and require nuanced judgment. While there’s a desired outcome, it’s rarely as singular as winning a chess game. Software and policy are rarely “done”; they evolve with new features, bug fixes, and changing needs. Unlike chess, these are ongoing processes.

In software development, technical specifications aim to create a more controlled environment, similar to chess’s rules engine. Ideally, specs detail user interactions and system flows: “user clicks button, system creates data structure, service executes.” However, real-world specifications are often far less precise – wish lists, napkin sketches, and vague requirements documents that leave much to interpretation.

Worse, requirements change or are disregarded, as illustrated in the earlier anecdote about the car removal program policy override. Consider a more recent example: developing a system to provide COVID-19 health information in areas with limited internet access. The idea was to use SMS surveys – text messages – to gather data. Initially promising, deeper analysis revealed significant challenges.

While simple retail surveys via SMS are feasible (“rate your shopping experience 1-10”), complex, multi-step health surveys with multiple-choice questions pose significant data handling challenges. What if responses are incorrectly formatted? How should invalid input be handled? These questions, analogous to the policy exceptions in the 1980s car removal programs, led to a crucial realization.
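To see how quickly those questions multiply, consider a sketch of the validation a single multiple-choice question would need. The question IDs and answer options below are invented; the unanswered requirements are the point.

```python
# Illustrative only: parsing one multiple-choice SMS reply. The question IDs
# and valid options are made up for this sketch.

VALID_CHOICES = {"Q1": {"A", "B", "C", "D"}}

def parse_reply(question_id: str, body: str) -> str | None:
    """Return the chosen option, or None if the reply can't be used."""
    answer = body.strip().upper()
    if answer in VALID_CHOICES.get(question_id, set()):
        return answer
    # Free text, several letters at once, an answer to an earlier question...
    # Re-prompt? Skip the question? Drop the respondent entirely? None of
    # these is a coding decision; each one is a requirements decision.
    return None
```

Writing the parser is easy. Deciding what should happen when it returns None is the part no amount of code generation answers.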

After detailed consideration, the team decided to halt the SMS survey project. This was a successful outcome. Proceeding without clear solutions for potential data errors would have been wasteful and ineffective. This mirrors situations in the 1980s car removal programs where poorly defined procedures led to inefficiencies and misallocation of resources.

Can AI effectively create software or manage complex programs simply by stakeholders directly interacting with it? Will AI proactively ask crucial questions about handling data input errors in an SMS survey, or anticipate policy exceptions in a car removal program? Will it account for human error and system missteps?

For AI to produce functional solutions, clear, precise requirements are essential. Even experienced developers often uncover unforeseen complexities only when they begin implementation. This is akin to discovering unexpected challenges when executing a car removal program – resistance from vehicle owners, logistical hurdles, or unforeseen regulatory issues.

Over recent decades, software development has shifted from rigid “waterfall” methodologies to more flexible “agile” approaches. Waterfall aims to define everything upfront, before coding begins. Agile embraces flexibility and iterative adjustments. Similarly, large-scale public programs have moved from top-down, inflexible planning to more adaptive and iterative implementation strategies, learning from initiatives like the 1980s car removal programs.

Many waterfall software projects failed because stakeholders believed they could perfectly define requirements, only to be disappointed by the final product. Agile development addresses this. Likewise, inflexible approaches to programs like car removal often encountered real-world complexities that initial plans couldn’t accommodate.

AI might excel at rewriting existing software for new platforms or languages, like modernizing COBOL systems. If requirements are perfectly defined, AI could potentially generate code faster and cheaper than human programmers. AI could be effective in a waterfall-like process – generating code from precise specifications. However, the weakness of waterfall isn’t the coding phase; it’s the upfront requirement definition. And defining those requirements, whether for software or a 1980s car removal program, requires human expertise, foresight, and a clear understanding of the real-world context. AI can perform extraordinary tasks, but it cannot read minds or define what we truly need.

