
AI: What Higher Education Needs to Know


Artificial intelligence (AI) can be exciting, daunting and concerning, all at the same time. It has the potential to bring opportunities to higher education, but educators and commentators have also raised concerns. Amongst all the chatter and conjecture, how do you separate the hype from the reality, the fact from the fiction?


Last year, UK universities showed they recognised the challenges. A cohort of 24 of them came together to draw up guiding principles on generative AI, meaning AI such as ChatGPT that can create new content from large data inputs. The principles set out five commitments: supporting staff and students in becoming AI literate, equipping them to use generative AI tools appropriately, using AI ethically, upholding academic integrity, and sharing and promoting best practice.

This gives us an insight into how important it is to fully understand AI, adopt responsible AI practices, and have strategies for introducing it.

Benefits of AI in higher education

As an educator, you can benefit from AI in a range of ways. It can save you time by automating some administrative tasks. It can help you create lesson plans and even suggest and create content. It can also help you tailor learning materials to meet all students’ requirements, including those of individuals with unique educational needs.

AI also creates opportunities around data because it can analyse large data sets to provide insights into student progress, for example. When you can access the right data in the right way, you can identify students who may be struggling and take action to address the issues.

AI concerns in education

Despite those benefits, AI in education is still at a formative stage, so it is understandable if you have concerns about how you and your students might use it. How much could you end up relying on AI-generated content? Will it always be correct? Might we come to over-rely on the technology in the future?

Another potential issue is bias, which can originate in the data sets AI is trained on, be baked into the algorithm itself, or be amplified in the results it produces. This is why higher education institutions need to carefully consider the risks and responsibilities that come with collecting data from staff, students and applicants, and with using that data in the systems and data sets that inform AI tools.

Ethics, governance and regulation of AI

It’s clear that AI technologies are advancing quickly, and businesses and educational institutions are beginning to incorporate them into their operations and activities. In response, governments and governing bodies are formulating their strategies and plans for safe AI development.

The inaugural AI Safety Summit, which took place at Bletchley Park in the UK last year, resulted in the Bletchley Declaration on AI safety. It was signed by 28 nations and acknowledged the opportunities and risks of AI, as well as the need to collaborate to better explore and understand them. Prior to the summit, in the U.S. the Biden administration issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

Yet despite steps like those, a recent report from the UK’s Open Innovation Team and Department for Education revealed that 23% of schoolteachers are concerned about the risks of using generative AI tools. The report also noted that teachers were not receiving information about generative AI through formal workplace training, suggesting that training for educators could increase their appetite to use it.

Training and knowledge exchange can make a real difference in how people react to technology. The unknown can be daunting. It is important to support educators in understanding AI and how they can use it to their benefit whilst minimising the risks.

One “knowledge vacuum” that may need plugging concerns AI ethics, governance and regulation. AI ethics has been defined as “a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies.”

In a recent survey of UK teachers, more than half (54%) said students should be taught the ethical implications of AI.

With this in mind, D2L launched a free AI course, powered by D2L Brightspace, to help anyone looking to use AI technologies learn about:

  • Responsible AI practices
  • How to mitigate biases and uphold transparency
  • How to comply with some of the emerging AI laws

The training also contains practical guidance to help you get started on your AI governance strategy, including how to assess your AI maturity across the three pillars of people, process and technology. It then outlines eight governance mechanisms for AI in your organisation: the first is to create acceptable use policies, while subsequent mechanisms cover training, monitoring, and testing AI systems for bias.

Learn more about AI in higher education

It is likely you already use technology such as a learning platform to help administer, report on and deliver learning. AI can offer ways to optimise the teaching and learning experience, but educators need the knowledge to use the tools and the confidence to engage with new ways of learning.

To learn more about AI, explore D2L’s resources, including the free AI ethics and governance course. You can also find out how D2L can help you create personalised learning at scale at your university with Brightspace, our learning platform for higher education.

Written by:

Lisa Elliott
