# Who Will Control the AI Infrastructure of Higher Education?

Dr Stuart Grey - Teesside, April 2026

## Slide 1 - Who Will Control the AI Infrastructure of Higher Education?

Good afternoon, everyone. In today's keynote, I'd like to talk about the shift from the question of how we should use AI to who controls the AI we use. We've heard already, and will hear, many more talks about pedagogy, assessment, and practice, but I think we have to take a step back and really think about governance, autonomy, and incentives when we interact with these large AI providers.

## Slide 2 - Dr Stuart Grey

So why am I up here today talking to you all? Well, there are two key reasons why I'm in a good position to talk about AI infrastructure within higher education.

## Slide 3 - Distributed Agents for Autonomous Spacecraft

The first is my technical background. I have a PhD in AI as applied to space systems, and this has given me a particular viewpoint on how the recent generative AI boom has rolled out. It allows me to understand a little better, perhaps, the different failure modes, the strengths, and the weaknesses of the underlying technology.

## Slide 4 - Free Text Analysis with Student Voice AI

The second reason is that, as well as being a part-time senior lecturer at the University of Glasgow, I've used that technical background to found Student Voice AI, where we analyse free-text comments for a wide range of UK institutions, including eight Russell Group institutions. So I have a balance of technical knowledge and hard-won experience of working with a number of UK universities on implementing these tools, with a view to improving the student experience. That impact on the final user matters: the most important part of what we do in teaching and learning is focusing on the student and on how these tools can improve their experience.
## Slide 5 - Transparency

I'd like you to bear in mind three key concepts as I go through my talk today. The first is transparency. By transparency I mean: do we have any visibility into what's happening? Can we audit it? Can we explain it? Do we have end-to-end visibility of the supply chain, of the data that goes into these models, and of the data and decisions that come out? I think that's really important, and it's something I want to touch on throughout the talk.

## Slide 6 - Distortion

The next concept is distortion, and there are two aspects to this. There's the one we're perhaps all familiar with: these models don't necessarily tell the truth, to put it kindly. So we have to be careful about where we use them and which type of model we use for which job. These models are remarkable; we are essentially commoditising a form of decision-making and intelligence, which is an educational leap forward, but we have to be really careful how it's used. The other aspect of distortion I'll lean into is that our choice of platform and vendor will ultimately change and distort what we're able to do and how we're able to do it. We're at a key inflection point, where any decisions we make will have an outsized impact on what we're able to do in the future, and we don't want to sleepwalk into it. We can make decisions based on plenty of evidence, and I won't argue with any individual institution's individual decision, but we shouldn't be making these decisions blind.

## Slide 7 - Extraction

The last concept is extraction. What value are we putting on the data we are generating, the data being used to train and evaluate these models? Are we giving it away for free to large tech companies? What is being extracted, and by whom? Who are we paying for the privilege of using these models?
And who is extracting the value from our work? I will, I hope, make the case that we should be in control of more of our own destiny in generative AI than simply handing it wholesale to large tech companies.

## Slide 8 - The argument in higher education is no longer about whether AI tools are useful. It is about who decides what we can do with AI.

There's a big discourse at the moment about what AI tools are or aren't good at, and you can pick holes in all sorts of things they do. They're not good at everything, but they are very good at a certain subset of tasks, and I think we'll find areas we're not currently using them for where they will also be excellent, if we use the right models with the right kinds of data. But I'm not worried about what they're good or bad at. What I'm worried about is who decides: who gets to decide who can use AI within an institution, which AI is used, and where the boundaries lie. We are research institutions as well. Are we able to push and make changes ourselves, or are we stuck with whatever a vendor defines?

## Slide 9 - From tool to infrastructure

In this first section, I want to talk about that journey: how do we go from the useful tools everyone started playing with a few years ago to something that is part of the infrastructure we could get trapped in? How do we go from a nice tool we're playing with in a web page to vendor lock-in?
## Slide 10 - AI is moving from visible tools to invisible systems

We've seen that transition: you go to ChatGPT, you copy and paste things in and out, and it's a very separate thing. That was how it was used early on, for teaching, assessment, support, and administration, and across the board in research as well. But we have to be really clear about the trajectory, because it moves from copy and paste, to something more integrated in the tool, like the Copilot button in Word, to a world where, as you do your work, things are automatically processed and scanned and notifications are sent. There's a clear trajectory as these tools mature, and we don't want to blindly sleepwalk along it.

## Slide 11 - Student adoption is already mainstream

I don't want to dwell on numbers too much, but student use is essentially across the board. There are cases of students not wanting to engage with it, for moral or philosophical reasons, which is absolutely fine, and likewise with staff. But the usage is there, and it's growing. These tools are everywhere because they have real utility. People frequently use them in the wrong way, on the wrong thing, with the wrong tool for the job. But when they are well aligned with a problem, they are excellent, and that is why this transition from tool to infrastructure is happening. People are building them into things, so we have to be on it as that build-out happens, to have some control and ownership of the process.

## Slide 12 - Universities need a viable system for AI governance, not just a policy

And that's why we don't necessarily need more policies on who should do what with generative AI. We need more governance, which to me means more teeth and more structure: this is what we're doing, and this is what we're checking.
As someone with a systems engineering background, I think a key theme running through this talk is the need for verification and validation. What are we doing? Are we actually checking that this thing is useful, whatever the use case might be? Are we bringing intellectual rigour to it? To do this, we need to be very coordinated across the institution. Different stakeholders in teaching and learning, research, administration, and procurement have to come together. We have to build the capability to be sure it's doing the right thing, which I don't believe we currently have. And we have to make sure it's all steered by our fundamental mission as an institution: to teach students, to carry out cutting-edge research, and to operate as a public body working for good.

## Slide 13 - The bubble

So, we've talked about the pervasive use of these tools, and this has been a very rapid expansion. There's lots of talk of the AI bubble, in terms of enormous sums of money being put into AI. This next section asks: what are the implications of that for higher education? We're not investing in these companies; most of them are private, we couldn't if we wanted to, and I wouldn't want to. Picking winners is very difficult. But there is definitely a bubble, or, if you're being kinder, a boom of investment happening in these tools. So what does that mean for us?

## Slide 14 - Tech companies are building capacity that universities may later inherit

Well, I'm actually very positive about this AI bubble, which might seem a bit strange. There will be problems when the bubble inevitably bursts: stock market crashes, for instance, since a large part of many stock market indices is based around these big tech companies, with a lot of circular financing. But all that aside, when the bubble bursts, there will be things left over.
And we can look to history for examples of this. When people think of a bubble, they frequently think of tulips in the Netherlands. But that's actually a very unusual example, in that no real utility was associated with the output of that bubble. There are many more examples through history of bubbles leaving behind real infrastructure. There are lots of winners and losers, mostly losers, during the bubble itself, but what's left is useful. Think of railways: lots of competing companies building them, most of which don't exist anymore, but the railways do. Likewise with telephone and telegraph networks: lots of competition, lots of investment, only a few winners, but what's left is real infrastructure. So we'll be left with compute and tooling, and we're left with open-source models. I don't have much time to go into that today, but in my own work at Student Voice AI I use a lot of the open-source LLMs, which we can run on our own hardware. It means we're not sending the data anywhere else, we're not training anyone else's models, and we're using our own models with full control and full visibility of what we're doing. We've also got an opportunity: as this bubble expands, we're getting a lot of institutional learning. So whatever else we're doing, we should keep our heads up, look at what is useful and what's not, and take an interest in other people's use cases, because as this bubble expands and eventually pops, we need to understand what is a good use for this, what we should keep once the companies go away, and what we should disregard as having always been useless.

## Slide 15 - The upstream build-out is vast, and highly concentrated

I'd like to give you a very quick idea of the scale of the investment made by this very small number of companies. We've got the hyperscalers, like Microsoft, Amazon, and Google, and we've also got the new AI frontier labs, such as OpenAI and Anthropic.
All of this build-out is concentrated in those companies, and we are talking about hundreds of billions in annual investment. Google's, or rather Alphabet's, investment for 2026 is about $180 billion. Meta is going big for it too. Across the industry, we're talking on the order of $1 trillion of hardware investment. So there's a huge amount of investment in new data centres. And as this new hardware is built, the old hardware still exists; it might just not be quite as fast or as efficient. So as we go through this build-out, there will be a real hardware boon for users with the expertise to build and deploy their own models. We can pick up hardware a few generations back, and we could be in a really good position.

## Slide 16 - But infrastructure built in a frenzy can still produce dependency

That might have been a surprising start: there is real benefit to the AI boom, but there are also real problems. Again, I'm not going to dwell on the economic externalities of the bubble eventually bursting, but ask instead: how do these tech companies, OpenAI especially, normally operate? Because they have all this investment, they try to grab market share as quickly as possible, and they are very generous. All these companies are serving tokens, serving these API models and chatbots, at a loss to try to win customers. That generosity isn't for generosity's sake; it's to build a customer base and then lock it in. And it means that, at this stage and throughout the whole process, the priorities and incentives are not aligned with ours. They are just looking to get customers, and they will do whatever it takes. They are not looking to produce the best product, and they're not looking to give the best outcome for students.
They're looking to get potential customers. And the prize is that the companies who win the larger market share can set the terms and essentially capture the norms of the market: what is expected, what is possible, what people understand as possible. I think there's a much wider range of things we could do with these tools if we were free with them, but these companies will obviously want to constrain that. There's a real tension here between consumer models and enterprise models, and we're stuck in the middle. A consumer model is very friendly; have a nice chat. It's great if you're asking for five things to do on your trip to Venice. The enterprise models will be good for running through all the TPS reports in a SharePoint. But when it comes to using these models for teaching and learning, the vendors are not going to focus on that. The norms might be set such that those two are the only things the tools can do, so we have to be really careful.

## Slide 17 - Who sets the defaults?

Who gets to set these defaults is a fundamental question, and we could sleepwalk into letting these companies do it, or we could do it ourselves. It's about taking control, getting the power to make the decisions, but power has to be taken; we have to be proactive. These companies are not going to give us what we want, because fundamentally there's a huge market for consumer AI and a huge market for enterprise AI, while the education market, for doing good pedagogy and having these tools genuinely help teach people, is very small, so they're not going to concentrate on it. It's up to us to set that direction. We have to take ownership and take control of the process.

## Slide 18 - A small number of firms increasingly control every layer

With a small number of companies controlling this AI stack, if you want to call it that, we're not picking from a free and open marketplace of interchangeable options.
They're all very tightly integrated, and this is by design. They want you to buy into their vision: this is the set of models from one provider, these are the tooling structures from that provider. And it's very difficult to compete against, because once you're locked in and using all their tools and products, they've got you. We can't create our own version of that in its entirety; it's not realistic, because of the scale asymmetry. As I mentioned before, the amount of investment needed to run these models at the frontier is huge. So we've got to take a different tack and decide who makes the decisions for us, who takes control, and who has power in this relationship with the AI providers.

## Slide 19 - The real question is whose goals the technology serves

The first question we have to tackle is the purpose of these models. We have to bring our institutional purpose to bear, because that fundamental direction is vital: currently the models are designed for either consumer use or enterprise business use. We have to say that our approaches, our systems, and our tools must be able to fulfil our aims as a university. There will be tension with those providers, and we have to accept that. We may have to put up some fights and say no to some things. If we want to protect student interests, academic interests, research, and everything that makes universities special, we have to push back and refuse to use these tools for everything in our jobs. We have to take a different approach.

## Slide 20 - Procurement teams set the agenda

And we have to be really clear about how these decisions are made, because, having been in universities for many years and also having sold to universities through Student Voice AI, I know these decisions are frequently made in procurement.
So when we're deciding what to buy, do we want an across-the-board tool we're going to use everywhere, or are we going to pick and choose? I would strongly urge the latter. We have to talk to colleagues in procurement and make sure we're all on the same page, because otherwise they, with the best intentions, will try to get the best deal from vendors based on the criteria we give them. And if we give them duff criteria, we are snookered: we end up with big contracts, the tool becomes a university default, and it's very hard to break out. These contracts are long, and it's far more likely a contract is simply extended, because everyone has built on one suite of very vertically integrated tools. So we have to be really careful at this stage with our procurement colleagues, and give them the best possible steer, to help procure what we need from these companies, and also perhaps to build what we need to build ourselves.

## Slide 21 - If universities buy into a set of tools on vendor terms, the infrastructure is decided before it is debated.

Just to reiterate that point: if we make these infrastructure decisions on the vendor's terms, we're basically admitting defeat, because then that underlying substrate, the AI infrastructure, is decided before we get a chance to debate internally what it is good for, what it is not good for, what we should build, and what we should buy.

## Slide 22 - What dependence looks like

So why would this be a bad thing? Why is doing these things on vendor terms so bad? What's wrong with being dependent on one of these AI labs' models? Again, they're excellent. ChatGPT and Claude can do incredible things. So what's the problem with buying into one of these stacks wholeheartedly?
## Slide 23 - The crucial divide is between augmentation and substitution

Well, it depends what you're trying to do. And the vendors have very different incentives around what they're trying to encourage you to do. Fundamentally, there are two broad classes of work AI can do. There's augmentation: helping us do what we want to do, helping us ideate, clean up our inboxes, or organise the files on our desktops, whatever it might be. And then there's substitution: a job role that existed is now replaced by AI, so rather than a discussion with a student, it's an AI chatbot. These lead to very, very different outcomes. What we have to do is be very careful about what we decide to augment. Some things don't need any augmentation at all, some things could really benefit from it, and some things could be substituted; some things are work no one wants to do. But we have to look at this through an institutional lens, not through the vendor lens of "in a company, this could be automated". Perhaps we don't want to do that, because we have to understand what our value is as a university and lean into it.

## Slide 24 - The easiest uses to scale are often the least educationally valuable

And this is a problem, because what is easy to scale and looks good in a demo is frequently the least educationally valuable. We could automate the whole thing. We could get ChatGPT or Claude to do end-to-end teaching to students, and it would be absolutely crap; I think we can all agree on that. So, with this institutional test of educational value, we have to be really careful about what these tools are actually good for. We're not just racing to the end; we're not speedrunning education.
We're trying to look at where the value in education is, where the value we bring lies, and where these tools augment that rather than substitute for it.

## Slide 25 - Pedagogy starts to bend around what the platform can measure

Because if we start to use these tools across the board, the pedagogy bends around what the platform can do. You see some of this when you implement a VLE, whether Moodle, Blackboard, or Canvas: when there's a set number of things you can do, those become the things you do. It's almost a tautology, but it matters. Before VLEs there was perhaps much more variation in what people were doing. Now, VLEs offer tremendous value, and I'm not saying we shouldn't use them, but the same dynamic applies here: if we're using a consumer or enterprise-grade business tool for educational tasks, the scope of what it's good at and able to do will not be aligned with what we wanted to do.

## Slide 26 - Priceless educational data will be captured unless it is governed

My last point on dependence is a real bugbear of mine. We are sitting on a goldmine of educational, pedagogic data. We have all the assessment definitions, the rubrics, and millions of student submissions over the years, and we are just giving this away. Actually, worse than that, we're paying for the privilege of giving it to a company that will, at best, ignore it. We could be doing so much more with that data. Rather than giving it to an upstream provider, we should be treating it as a governed asset that we control and use however we want, to our own institutional and philosophical ends as higher education.

## Slide 27 - The lifecycle of capture

Looking back at history, how will we find ourselves once we depend on these companies? Because when I put it like that, that we are paying for the privilege of giving away our biggest assets, you know we don't want to do that.
So how do we get caught in these loops of being captured by and dependent on technology providers?

## Slide 28 - Convenience becomes dependence in stages

Well, it's back to my previous point about the boom: they are incredibly generous. Capture by technology firms begins with considerable generosity, but it always comes to an end eventually. They sell at a loss to capture market share, and then the squeeze starts and prices rise, by which point you are committed, you've got no other options, and you are locked in. This has happened throughout the history of technology, and for the modern internet platforms it is simply the modus operandi. These things are great, but we have to avoid being locked into one platform or one vendor.

## Slide 29 - Path dependence becomes governance

And this can happen quite slowly. What I'm really noticing is that norms become standards, the standards harden, and institutional investment compounds. Say we have a big contract with OpenAI; we've invested lots of training in their specific tools and their specific way of doing things. At that point we're committed, and our options narrow.

## Slide 30 - Prefer modular architecture and plural ecosystems

So what should we do instead? We should prefer a modular design. We should not commit to one provider. We should have the option to move between providers and to have our own models; give ourselves that optionality. The more different types and sizes of models we are exposed to, the better, because it keeps our options open and keeps people aware that we don't have to buy everything in from one of the frontier labs. Instead, we can build aspects ourselves, using these tools, the hardware, the open-source models, and the expertise we have in the university.
All to build what we need.

## Slide 31 - A different strategy

Okay, so how do we go about this? Hopefully I've laid out the key issues: what's happening with the AI boom, what happens with capture by these tech companies, and the risks of sleepwalking into them. So what could we do differently?

## Slide 32 - The strongest response is neither refusal nor surrender

For me, the strongest response isn't pure refusal to use these things, and it's also not pure surrender to a vendor; it's somewhere in between. I do believe these frontier labs can really help us; the tools they sell can help us, so we do buy some things. It makes sense to buy where a tool is useful for our workflow, where our use case overlaps with the consumer or business/enterprise use case. We should use those tools. But there's a large area where the use case is very different, where we want control and ownership and want to set a direction aligned with our university's principles, and that's where we should selectively build or host our own local models. We're not talking about training frontier models for billions of dollars. Open-source models exist; I've got them running on this laptop now, and you can play with them afterwards if anyone wants a chat. I can give you some tips on how to get started, and it's not difficult. The models exist, and we can build our tools quickly, because in universities we have incredible assets at our disposal, and I'll reiterate this a few times: we have access to the hardware, we have access to open-source models, and we have access to expertise. I'm not just talking about computer scientists here, but social scientists and educational researchers too. And we have all of the data needed to help with this, which I'll touch on a little later. So we have the ability to do these things.
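To make the modularity point concrete, here is a minimal sketch, in Python, of what provider optionality can look like in practice. Everything in it is invented for illustration: the class and method names are hypothetical stand-ins, not any real vendor SDK, and a real deployment would wrap actual API clients or locally hosted models behind the same kind of thin interface.

```python
# Illustrative sketch of keeping provider optionality: institutional code
# talks to a thin interface, so a commercial API and a locally hosted
# open-source model stay interchangeable. All names here are invented
# for the example; they do not correspond to any real SDK.

from typing import Protocol


class TextModel(Protocol):
    """Anything that can turn a prompt into a completion."""
    def complete(self, prompt: str) -> str: ...


class VendorModel:
    """Stand-in for a commercial frontier-lab API client."""
    def complete(self, prompt: str) -> str:
        return f"[vendor answer to: {prompt}]"


class LocalModel:
    """Stand-in for an open-source model on university hardware."""
    def complete(self, prompt: str) -> str:
        return f"[local answer to: {prompt}]"


def give_feedback(model: TextModel, submission: str) -> str:
    # The institutional workflow depends only on the interface, never on
    # one vendor, so swapping providers is a one-line change at the call
    # site rather than a migration project.
    return model.complete(f"Give formative feedback on: {submission}")


print(give_feedback(LocalModel(), "an essay on path dependence"))
```

The design choice is the point, not the code: as long as the seam between our workflows and any given model stays this thin, the build, buy, or refuse decision remains reversible.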
Finally, I think it's really important that we keep the option to refuse to use AI in certain cases. We don't have to use it across the board. Anyone pushing AI into everything is, frankly, being a bit silly, because there are areas where not using AI is actually better, especially when it comes to educational outcomes and the student experience. We have to be very selective, and you can really put people off by automating away all human contact.

## Slide 33 - Start from public-interest goals, not adoption goals

So, how do we decide what to buy, what to refuse, and what to build ourselves? Start from our principles: what are the human elements we want to maintain, and what are the key aspects for us? For me, throughout my career, it's been about inclusion and accessibility, and these AI tools can really help with that if designed and deployed in the right way. We think about our educational purpose and lean into it; that is our starting point, and that is the lens through which we decide whether to buy, refuse, or build ourselves.

## Slide 34 - Make evaluation a core institutional capability

Here comes a key point in the talk. I talked before about the assets and capabilities we have: access to this post-boom hardware, access to open-source LLMs, access to expertise within the university, and access to the data. It's those last two combined that we need to really think about, because we want to make evaluation a core institutional capability: the ability to say whether a model is any good for our use case. Then we can decide: are we buying from the consumer side or the enterprise side? Are we building? How are we testing how good these things are? If we're using lots of different models, how do we go about saying which ones are best?
Evaluation should become a key capability, and it's something we barely do at all. Now, evals, as they're known in the sector, are really common: that's how these models are made. The models are improved by being tested and iterated over time, and it's something we just don't do, yet we have everything needed to build that capability. It should become a core competence of universities to evaluate these models against our requirements.

## Slide 35 - Sector collaboration and public alternatives matter

Now, we don't have to do all that evaluation ourselves. Something I've learned from Student Voice AI and working with many institutions across the UK is that people are willing to share anonymised data in order to improve outcomes for universities. So we can do shared evaluation. This helps us make those build, buy, or refuse decisions, which should be quite granular: I don't think we should be buying into one vendor or refusing all vendors, but deciding on a case-by-case basis, in a joined-up way, and evaluation supports those decisions. It also helps when the models change. Part of the problem of capture is the squeeze that follows the initial generosity. Frequently a provider will bump you down to a cheaper model, and you see this all the time if you use these tools a lot. There are regressions: they'll say this is still the same model, but they've made it more efficient and smaller, with some performance loss. If we have a large shared evaluation data set, we can know when that happens and say no, we're switching to another option or a different model. We have visibility, and it brings some rigour to the whole thing. And that's on us: we have to have that evaluation capability, and we map it to our principles.
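The evaluation loop described above can be sketched in a few lines of Python. This is a deliberately toy illustration, not a real institutional eval: the item format, the keyword-based scoring, and the 5% regression threshold are all assumptions made for this example, and a real harness would use richer rubrics, larger shared datasets, and human-validated scoring. But the shape, score every candidate model on the same fixed set and flag drops against a baseline, is the whole idea.

```python
# Minimal sketch of an institutional eval harness (illustrative only).
# The item format, keyword scoring, and 5% regression threshold are
# assumptions for this example, not a sector standard.

from typing import Callable, Dict, List

# A shared evaluation set: prompts drawn from real teaching tasks, each
# with keywords a good answer should mention.
EVAL_SET: List[Dict] = [
    {"prompt": "Explain photosynthesis to a first-year student.",
     "expected_keywords": ["light", "carbon dioxide", "glucose"]},
    {"prompt": "Give formative feedback on a lab report lacking an abstract.",
     "expected_keywords": ["abstract", "summary"]},
]


def score_item(answer: str, expected_keywords: List[str]) -> float:
    """Fraction of expected keywords present in the answer (case-insensitive)."""
    answer_lower = answer.lower()
    hits = sum(1 for kw in expected_keywords if kw.lower() in answer_lower)
    return hits / len(expected_keywords)


def evaluate(model: Callable[[str], str]) -> float:
    """Mean score of a model (any callable prompt -> answer) over the eval set."""
    scores = [score_item(model(item["prompt"]), item["expected_keywords"])
              for item in EVAL_SET]
    return sum(scores) / len(scores)


def flag_regression(baseline: float, current: float,
                    tolerance: float = 0.05) -> bool:
    """True if the current score drops more than `tolerance` below baseline,
    e.g. after a vendor quietly swaps in a cheaper model."""
    return (baseline - current) > tolerance


# Stand-ins for a real API or locally hosted model:
def model_v1(prompt: str) -> str:
    return ("Photosynthesis uses light and carbon dioxide to make glucose. "
            "An abstract is a short summary of the report.")


def model_v2(prompt: str) -> str:  # a quietly degraded replacement
    return "Plants grow using sunlight."


baseline = evaluate(model_v1)
current = evaluate(model_v2)
print(flag_regression(baseline, current))  # the degraded model is flagged: True
```

Run against every provider update, a harness like this is what turns "the model feels worse lately" into evidence an institution can take into a contract conversation.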
And we can group these efforts nationally, by region, by mission, or by size, whatever it might be. However the groupings happen, the shared evaluation needs to happen, and I really think that's something positive to aim for.

## Slide 36 - Ownership means owning some of the stack

So we have to take ownership of and responsibility for our decisions about which AI providers we use. Again, within build, buy, and refuse, we have to decide what compute to use where and which cloud providers to use. We have to decide which, hopefully university-specific, models to use, develop, or build collectively. And we have to evaluate this process continually. This isn't a one-off choice; it will be much harder than saying "okay, we go with OpenAI, thank you very much". Instead, we need that rigour: build an evaluation set from everything we've done before and everything we want to do, which inherently encodes our institutional priorities because it is what we've been doing, and use it to evaluate these models.

## Slide 37 - The alternative to passive dependence is not heroic self-sufficiency. It is practical stewardship.

What I'm not saying is that we have to do everything ourselves. It's a common failure mode across universities to want to do everything in-house, when there is real benefit to outsourcing some things. If you need a tool to summarise an email, and I'm not sure why you would, but perhaps some people want that, give that to one of these models; absolutely fine. But when we're looking at key functions mapped to our principles as an institution, education being one and research the other, we have to be really careful about what we're doing. We don't want to be entirely self-sufficient and cut off our nose to spite our face.
We also don't want to be passively dependent on these Silicon Valley companies. Instead, we want to be practical stewards of our data and our models, deciding what we want to build, what we're looking to buy, and what we're going to refuse, because that third option is there, and we have to exercise it on a case-by-case basis. There should be some of each. Currently we're vacillating between buying everything and refusing everything, while building nothing, when there's a far healthier mix in there, aligned with our principles as an institution. ## Slide 38 - What the next decade looks like So, to wrap this up, what might the next decade look like? Now, I'm not going to get technical here, although I'll happily pontificate over a beer later on if anyone wants to discuss where these models are going. Instead, I want to give you two scenarios: what does a positive outcome look like, and what does a negative one look like? ## Slide 39 - If universities play this moment well So, if we play this moment well, then post-bubble there will be cheap infrastructure we can use to build our own models and do our own thing. That becomes a real option. It's possible now, and it becomes more and more possible as we go on. We can build these systems aligned with students and a human-centred pedagogy; teaching and learning needs very specific models, and we can build them and decide what goes into them. We have the expertise in universities. We have all the data we need. We can build really robust evaluation, so we know whether it's actually working or not. Again, academic rigour: we bring that to the table. And what this could lead to is an example of a set of public bodies working together to build their own AI tools, a real example for the public sector and beyond.
## Slide 40 - If universities drift into platform dependency On the other side, if we drift into platform dependency, if we don't give our procurement teams the right steer and they strike what looks to them like a great deal with one of these providers because we haven't given them the information they need, then we'll be stuck in an extractive process, paying through the nose for less and less, because that's how these companies work: they've got to pay back that investment. The provision will be limited. We'll be limited in what we can do, because we'll be using consumer or enterprise-grade models for these things. There will be no governance and no transparency around what these models are doing, what they're designed to do, what they're optimised to do. We would have all of that if we did our own thing in those key cases. ## Slide 41 - The next few years require three decisions from leaders So, the next few years require three key decisions from leaders in universities, and I'd say everyone in this room is a leader in their university; I'm not just talking about the higher-ups. We can all make decisions, and we can all push for the right decisions to be made. Those decisions are: first, what must we absolutely keep control over? What can't we give away? Second, what should we build together? What is feasible, and what could we start with? Some really interesting questions there. And third, when we build these things, what rights do students have? A key aspect I've not had time to touch on today is that if we build these models ourselves, we can bake in student rights: visibility, transparency into what's happening, and auditability, because we have all that evaluation data. So we can build these models to do what we want, but we have to decide how, up front. ## Slide 42 - Universities are not only deciding whether to use AI.
They are helping fix the defaults of higher education for years to come. So, to summarise what the next decade might look like: we aren't just deciding whether or not to use AI, we are deciding the defaults of how education will work for the next ten years. Because this is happening; it has happened, and we have to be aware of that and accept it. So it's not a philosophical question, not "AI good or AI bad"; it's a question of how we want to educate students over the next decade, and that's how important these decisions are. ## Slide 43 - We have all the pieces to build and control our own AI systems, so what are we going to build? (no narration on this slide) ## Slide 44 - further-reading (no narration on this slide)