Ethics on Chip

Ethics on Chip – to be read like the ‘System on Chip’ (SoC) in electronics.

What is it?

An electronic, hardware-level module that sits on the circuitry of electronic devices – smartphones, computers – and ensures that those devices cannot be used unethically.

For example, it could govern the training of machine learning models, and the running of recommendation, prediction, suggestion, and decision models.

Think of Ethics on Chip (EoC) along the lines of the Trusted Platform Module (TPM) by the Trusted Computing Group, and application-specific integrated circuits (ASICs). It’s like an ASIC for ethics – an integrated circuit that embeds ethics into electronic devices.

This idea emerged while engaging with the topics of ethical AI and trustworthy AI, and the broader discussions about AI and society. It is also an example of applied Public Interest Technology.

Computers are not microwaves, and humans are not the weakest links in cybersecurity.

Computers are not like bread toasters or microwave ovens. They are like children, and more specifically, like toddlers.

Just as toddlers need to be protected from “stranger danger”, digital systems need protection from “cyber stranger danger”. And both of them – toddlers and digital systems – need us to watch out for them and keep them safe.

Just as toddlers are prone to physical injury from tumbles and falls, computers are prone to technical accidents – data loss from storage corruption, hardware failure, and the like. But there is also the other type of danger – stranger danger. Toddlers need to be protected from those with malicious intent, anti-social elements, and exposure to age-inappropriate situations. Likewise, digital systems need to be protected from cyberattackers operating with various intentions. While lollies, ice creams, toys, stories, and impersonating a relative are among the props and techniques used against toddlers, phishing emails and malicious advertisements are used against the human users of digital systems. The goal of such emails and ads is to get the human user to run bad code on their machine or divulge sensitive account credentials or data.

Digital systems are not like independently capable adults. They are like toddlers. Just as child protection laws and child safety and welfare policies do for children, technical solutions and controls provide one arm of defence for digital systems. What is the second arm of defence and protection? How do we keep toddlers safe from stranger danger and other such dangers? How do we keep them safe, generally? By practising watchfulness. Security Education, Training, and Awareness (SETA) is the cyber equivalent of that watchfulness.

But such SETA efforts are often reported to be ineffective, and those who receive the training often find it painful, annoying, boring – a nuisance. There is one simple thing that could help solve this effectiveness problem: acknowledging that digital computer-based systems are inherently vulnerable systems. They are easily accessible and modifiable code-execution machines. They can run good code and bad code with equal ease.
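A minimal sketch of this point: an interpreter executes whatever code it is handed, with no notion of intent. The `run` helper and the example snippets below are hypothetical illustrations, not part of any real system.

```python
# A toy illustration: the same execution machinery runs "good" and "bad"
# code with equal ease. The machine itself has no notion of intent.

def run(program: str) -> dict:
    """Execute a snippet of Python and return the names it defined."""
    namespace = {}
    exec(program, {}, namespace)  # the interpreter does not judge the code
    return namespace

good_code = "total = sum(range(10))"        # benign arithmetic
bad_code = "stolen = 'secret credentials'"  # stands in for malicious logic

print(run(good_code))  # {'total': 45}
print(run(bad_code))   # {'stolen': 'secret credentials'}
```

Both snippets go through exactly the same path: the machine's "ethics", if any, must come from somewhere outside the execution mechanism itself – which is the gap a module like EoC would aim to fill.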

Digital systems carry a wide variety of inherent vulnerabilities. They are not the all-knowing, all-powerful, perfect machines they are typically portrayed to be. They require a great deal of care and attention to be kept safe. In human development, through the process of maturation, adults develop the ability to keep themselves safe, make informed decisions, manage risks, and respond to situations with the faculties of intelligence, memory, and perception. Digital computer systems have not yet reached such a state of maturity. Since their programmable surface is large and easily accessible, they remain highly exposed to attack.


Identity Marker Upgrade: from (Technical) Generalist to Public-interest Technologist

Discovering new and varied identity markers is something I have found useful. When you are interested in many areas, it is sometimes difficult to answer the usual questions: what do you do, and what are you interested in?

The broadest catch-all phrase that has been useful is “generalist”. I then made it “technical generalist” to add some detail. Recently, I read Speaking Tech to Power: Why technologists and policymakers need to work together, in which I discovered a perfect upgrade to my identity markers: public-interest technologist. Someone “working at the intersection of security, technology, and people” as detailed by the post’s author, Bruce Schneier.

And it makes perfect sense to me. Time to read more into it.

When Public Good becomes the Profit

The profit motive energizes businesses and industry to innovate with information systems.

Public Good as the profit motive should energize governments and public administrations to innovate with information systems.

It should become possible and commonplace to consider an increase in Public Good as Profit.

It’s not about simply “digitizing” or digitally transforming existing public administration policies and workflows as-is. Instead, it’s a potent opportunity to review public administration and organization from the ground up.

At this time, businesses face fundamental pressure to re-invent themselves as non-destructive juggernauts. The public realm actually has a chance to lead this turnaround – from extractive, unjust practices in the name of profit to an economics of care, justice, and sustainability. The power of information systems is waiting to be deployed for this cause.