Artificial Intelligence (OCR A Level Computer Science)
Revision Note
Written by: Jamie Wood
Reviewed by: James Woodhouse
Artificial Intelligence
Artificial Intelligence (AI) is a field of computer science and engineering that aims to create intelligent machines that can perform tasks that typically require human intelligence
AI systems are designed to simulate human cognitive functions, such as learning, problem-solving, reasoning, perception, and decision-making, enabling them to analyse complex data, adapt to new situations, and improve their performance over time
Artificial Intelligence (AI) has transformative potential across numerous areas, but its adoption and integration into society raise several moral, social, ethical, and cultural considerations.
Moral Impacts
AI developers & researchers
AI developers and researchers face moral dilemmas over the potential consequences of their creations, such as accountability, privacy, job displacement, and social inequality
They must ensure that AI systems are designed to prioritise ethical values, uphold human rights, and avoid biases that could lead to discrimination or harm
End users & consumers
Individuals who use AI-powered products and services may encounter moral concerns regarding their personal data privacy, security, and informed consent
Personal data can include sensitive information, such as health records or financial details. This raises moral questions about respecting users' privacy rights
The responsible use of AI technologies to ensure transparency and safeguard user interests is paramount
Users must be provided with clear, comprehensible information about how their data will be used, who will have access to it, and what the potential risks are. In many cases, the technical complexities and uncertainties of AI make this difficult
Social Impacts
Workforce
AI adoption may lead to changes in employment structures
Some jobs could be automated, leading to unemployment or a shift in job roles
Reskilling and upskilling initiatives are necessary to prepare the workforce for AI-driven transformations
Education & accessibility
The accessibility of AI technologies may create a digital divide, favouring individuals with more resources and access to technology
Ensuring equal access to AI education and resources will be crucial to prevent social disparities
Healthcare
AI applications in healthcare can improve diagnostics and treatment, but ethical considerations arise regarding patient data privacy and the role of AI in decision-making, especially in critical medical situations
AI systems, while often highly accurate, can make mistakes. An incorrect diagnosis or treatment recommendation in a critical situation can have severe consequences. Therefore, the reliability of AI systems is a significant ethical concern
Deciding the extent of human oversight and the final decision-making authority is a key ethical consideration. It's imperative to strike a balance between leveraging AI capabilities and retaining human judgment, particularly in critical situations
In the event of an adverse outcome based on an AI's decision, determining liability becomes complex. Is the AI system, its developers, or the treating physician responsible? Clear guidelines and regulations need to be established to address this concern
Ethical Impacts
Bias & fairness
AI algorithms may inadvertently show bias if trained on biased datasets, leading to discrimination against certain demographic groups
Ensuring fairness and equity in AI applications is a significant ethical challenge
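One common way to check for this kind of bias is to compare how often a model gives a positive outcome to different demographic groups. The sketch below illustrates this with a simple "demographic parity" check; the decision data, group labels, and the metric chosen are all hypothetical and for demonstration only:

```python
# Illustrative sketch: measuring one simple fairness metric
# (demographic parity) on hypothetical model decisions.
# All data below is made up for demonstration purposes.

def selection_rate(decisions):
    """Fraction of cases that received a positive decision (1)."""
    return sum(decisions) / len(decisions)

# Hypothetical model decisions (1 = approved, 0 = rejected),
# split by a protected attribute such as demographic group
group_a = [1, 1, 0, 1, 1, 0, 1, 1]
group_b = [1, 0, 0, 0, 1, 0, 0, 1]

rate_a = selection_rate(group_a)   # 0.75
rate_b = selection_rate(group_b)   # 0.375

# Demographic parity gap: 0 would mean both groups are approved
# at the same rate; a large gap suggests the model may be biased
parity_gap = abs(rate_a - rate_b)

print(f"Group A rate: {rate_a:.2f}, Group B rate: {rate_b:.2f}")
print(f"Demographic parity gap: {parity_gap:.3f}")
```

A gap like this does not prove discrimination on its own, but it flags that the model's outcomes differ by group and should be investigated, for example by auditing the training data.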
Autonomous systems
Ethical dilemmas arise with autonomous AI systems like self-driving cars
Autonomous vehicles must be programmed to avoid accidents. However, in scenarios where an accident is unavoidable, how should the AI decide the course of action? This dilemma is often encapsulated in variations of the "trolley problem", where a self-driving car must decide between two undesirable outcomes
Decisions regarding prioritising safety and potential harm to individuals or groups raise complex ethical questions
Data privacy & security
The collection and utilisation of vast amounts of data by AI systems raise ethical concerns about data privacy, consent, and the potential misuse of personal information
Cultural Impacts
Language & cultural representation
Language models and translation tools powered by AI have the potential to unintentionally uphold cultural biases or inaccurately represent specific languages and dialects, affecting cultural identity and heritage
Ethical AI in art & creativity
The use of AI in art and creative endeavours sparks debates about the authenticity and originality of AI-generated works, and whether artistic merit should be attributed to AI systems at all
AI & cultural preservation
AI can play a role in preserving cultural heritage, such as digitising and restoring historical artefacts. However, cultural sensitivity and respect for indigenous knowledge must be considered
Case Study
AI Ethics
UNESCO produced the first-ever global standard on AI ethics – the ‘Recommendation on the Ethics of Artificial Intelligence’ in November 2021. All 193 Member States adopted this framework
Artists' work used in AI training data
Artists are increasingly concerned about their work being used without consent to train artificial intelligence (AI) systems, leading to potential copyright infringements
Artist Kelly McKernan found over 50 pieces of her artwork had been used this way
Several artists, including McKernan, have filed lawsuits against AI firms like Stability AI and DeviantArt, joining a growing trend of legal action against such companies
Getty Images also sued Stability AI for allegedly illegally copying and processing 12 million of its images
Artists are calling for more regulation and protection, with some using watermarks to track the usage of their images
The EU is proposing AI tools disclose any copyrighted material used in their training, and the UK is planning a global AI safety summit
Hollywood on strike
The use of artificial intelligence (AI) in Hollywood is causing concern among industry guilds, leading to strikes by SAG-AFTRA and the Writers Guild of America
AI has been used to digitally recreate actors and generate scripts, but its use is currently limited by computing power and training material
A key issue is whether creators should be paid when their work is used to train AI, including residuals for reused work
The guilds argue that studios aim to replace human actors and writers with AI, a claim the studios deny
The use of AI could limit opportunities for aspiring actors and new writers
The outcomes of pending AI-related lawsuits in the U.S. could have significant implications for all industries, not just Hollywood
AI in the judicial system
The COMPAS AI program is used in the judicial system to predict a defendant's potential for future criminal behaviour
The program uses undisclosed, proprietary factors to generate a risk score, making the score difficult for defendants to challenge in court
Critics argue that AI can't fully understand human behaviour nuances and may reflect human biases
They suggest a joint decision-making approach, where AI passes uncertain cases to humans
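The joint decision-making approach described above can be sketched as a simple triage rule: the AI handles clear-cut cases, and any case whose risk score falls inside an uncertainty band is referred to a human. The scores, thresholds, and band width below are hypothetical, not taken from the COMPAS system:

```python
# Illustrative sketch of joint AI-human decision-making:
# confident predictions are handled automatically, while
# uncertain cases are passed to a human reviewer.
# Thresholds and scores are hypothetical examples.

UNCERTAIN_LOW = 0.4   # below this: confidently "low risk"
UNCERTAIN_HIGH = 0.6  # above this: confidently "high risk"

def triage(risk_score):
    """Route a case based on the model's risk score (0.0 to 1.0)."""
    if risk_score < UNCERTAIN_LOW:
        return "automated: low risk"
    if risk_score > UNCERTAIN_HIGH:
        return "automated: high risk"
    # The model is not confident either way, so a human decides
    return "refer to human reviewer"

for score in [0.12, 0.55, 0.91, 0.48]:
    print(f"score {score:.2f} -> {triage(score)}")
```

Widening the uncertainty band sends more cases to humans (safer but slower); narrowing it automates more decisions, so choosing the band is itself an ethical judgement.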