Episode 054: Adam Diamant

Who should get the next MRI scan?

In this episode of Podcast or Perish, I talk with Dr. Adam Diamant of the Schulich School of Business, York University, who studies waiting lists using complex models and artificial intelligence. We discuss the ethics of waiting lists for scarce resources, such as MRI scans and life-saving surgeries. Adam says we can improve on the “first come, first served” model that we often default to.

Transcript

Cameron: Everyone knows what a traffic jam is. Streets that can handle hundreds of cars per hour don't just slow down when they get too busy, they grind to a complete halt. Now, imagine we're talking about the waiting list for an MRI. If everyone could just get in line and move through the system at a steady pace, things would be okay.

But MRI patients are not all the same. Some are life and death. And MRI machines can break, causing chaos for patients and staff. What is the best way for a hospital to deal with long waiting lists for expensive equipment? Adam Diamant studies these questions using sophisticated data models and AI. If you want to learn about cutting-edge quantitative analysis, he's the guy to talk to.

So I'm going to right now.

This is Podcast or Perish, a podcast about academic research and why it matters. My name is Cameron Graham, Professor of Accounting at the Schulich School of Business in Toronto.

Adam, welcome to the podcast.

Adam: Hello. Thanks for having me here.

Cameron: Can you please introduce yourself?

Adam: My name is Adam Diamant. I've been a professor at the Schulich School of Business for roughly 10 years now, and I study optimization, AI tools, machine learning, and a variety of empirical methods.

And primarily I like to apply those tools to gain insight into the management of healthcare systems. I try to ask questions like, how can I improve the operation of such a system? What lessons can we learn that might generalize to other systems? Using these different tools, I try to answer those questions in, maybe, a rigorous and very precise way.

Cameron: So your academic field of study is operations management and you're applying it to the healthcare sector, right?

Adam: Exactly.

Cameron: And then you're choosing a quantitative focus using data modeling, and you're getting into AI and all that sort of stuff, right? Yeah. So what, what drew you to that combination of things?

Adam: I feel like I've always been quantitatively minded, I guess you could say. I remember in Grade 3 we would get these math sheets, and I wanted to obviously get the questions right, but I wanted to do it as fast as I could. So there was a sort of competitive aspect to it, but I really liked it. And then I went to high school, and in high school I really felt where I was more suited. You know, there's math for the sake of proofs and looking at beautiful structure, but I really liked the way math explained real-world systems. I was really drawn to physics, actually, and that's what I studied in university: physics and computer science.

And in particular, I was really drawn to atmospheric physics. Why? Because it involved technology, these global climate simulation models, but also it involved math and science that you lived. So you experienced the science by, you know, going outdoors and it's snowing or there's a windy day and in order to sort of explain that phenomenon, you would use math and you'd use computational tools.

That's sort of the genesis. For some reason, I was really drawn to that and the healthcare space, the healthcare operations space, I feel like it gives you the same kind of experience where you need to precisely describe what you're experiencing in an MRI wait list or your interactions with your GP.

But you need to do this precisely and rigorously, and that's where the empirical tools come in.

Cameron: Mm-hmm. Now when you're going about all this stuff, who do you envision as your primary audience? Some academics, they are writing to other academics, that's their audience. Others are intensely practice focused.

Adam: Right.

Cameron: Others are like very much about teaching. How do you see yourself?

Adam: I feel, look, there are two sides of this coin, I think. On the one hand, you want to publish papers. And to do so, you need to make sure that you rigorously and very, I guess, academically answer a particular research question.

And all of that goes into a paper that you then publish, hopefully in the top journal, right? So that's one side. But on the other side, I try my best to, I guess, work on projects and start collaborations with people who are in industry, who are in the hospitals. And so, invariably, the result of these research projects ends up being a tool that they can use, or a couple of presentations that educate people on, you know, maybe they could think about this versus this.

So the hope is that it somehow gets into the hospital or an outpatient care facility.

Cameron: Mm-hmm.

Adam: I just, there's no guarantee that after such a long research process that is going to be the case.

Cameron: No, no, but your goal is to do more than simply describe the problem.

Adam: Well, you want to make an impact, I guess. I don't want to just speak to the academics out there and say, Hey, look at this, because that's a little bit, in maybe my perspective, false. It's like, you study a real-world system, you articulate its potential drawbacks, you propose solutions, you rigorously test them and experiment with potentially new ways of doing things, and then you don't circle back and talk to the people from whom you got the problem and potentially the data? It seems a little bit false. So I would love it if all of my ideas ended up somewhere in practice helping people. It's just the nature of how things go that that's not always the case.

Cameron: So is there any aspect of your research that you would consider to be field-based research?

Adam: At the start of my career, not as much. I would get these secondary data sources from sometimes places that I didn't really know much about and then do some analysis on that.

But as I've become sort of more senior and I've embarked on my own research projects, yes, it's primarily field-based, insofar as I'm liaising with hospital administrators or doctors, and we are agreeing on, you know, what exactly we need to address this problem more thoroughly. So that could be secondary data sources. Sometimes it could be primary data sources, collecting surveys from patients.

Uh, recently I've started doing more interviews even, which I've never really been trained on, but through collaborations with faculty members here at Schulich, it's become a really important part of the process. So it really involves whatever data you need to, as I would say, rigorously address the problem at hand, and as long as you can utilize it in a non-trivial way, that's great.

Cameron: Yeah. Interviews are a very interesting way of trying to get at some underlying themes of what's going on in the workplace. I found that when I'm doing interviews as a professor from Schulich, I tend to get official answers from people. Like, there's not a lot of spontaneity.

Adam: Right.

Cameron: Whereas if your PhD student goes in and does the interview, it's just more relaxed, and people talk, you know, from a more personal perspective about the problem, and I find I can learn more that way.

Adam: I love interviews, because you're talking to the people who are on the front lines of, whatever, you know, a healthcare process. Let's say they are the schedulers or the nurses or nutritionists or what have you, the people interacting with patients. They have stories, but I find a lot of the time that because I'm talking about the operation of a system, they seem to be more forthcoming.

And in particular, it's because I have a set of experiences from interacting with many of these systems. They have a set of experiences interacting with that particular system and then we can kind of commiserate a lot of the time, which is really great, 'cause sometimes you study things where, you know, you don't have that direct experience and so you're learning through the interviewee what they're supposedly going through.

So I just really like that in-person dialogue because it's a chance for us to get together and talk about problems. Plus I would say, you know, in the hospital setting or outpatient clinics, it's not as if there's sort of this fear of retribution that you might have if you go to a private industry and ask, you know, Hey, talk about your employer.

Cameron: Yeah. Do you ever get to talk to the patients at all?

Adam: Sometimes. You need to go through a lot of research ethics boards.

Cameron: Yeah. Oh, I imagine.

Adam: Definitely. Um, I have done this more through surveys in the past, just because you need a lot of data and many times patients don't want to sit with you for an hour and talk. They'd rather just fill in a survey.

Cameron: Definitely. Yeah. Yeah. So, you know, interweaving all these perspectives on the problem gives you a richer understanding of it. But eventually though, you're trying to look at this, not with qualitative analysis of the interviews, but quantitative data.

Adam: Definitely.

Cameron: So how do you make that shift then?

Adam: Well, there's one paper, for example, where we use both. So at the start we have a rich secondary dataset, hundreds of thousands of patient interactions, and we kind of triangulated exactly the phenomenon we're studying. Here's the problem, and we want to make sure that it's rigorously expressed and that people are convinced that it exists.

We also propose, let's say, a mechanism or two to say, Hey, look, not only does it exist, but maybe we can address it in the following ways. The discussions with, let's say hospital administrators then come into play because hey, they can attest to the problem.

They can assess whether the feasible solutions are indeed implementable.

They can provide not only feedback on the problem that they've experienced, but also on how well the solution that you are proposing might work. So there's a variety of ways in which that relationship can work. But nowadays, even with interviews just about the experience, if you utilize AI agents and AI tools, you can detect themes that you might not have detected if you had done, let's say, a qualitative analysis by yourself. So even having this rich unstructured dataset, in this new AI age, is very useful.

Cameron: Okay. Yeah. I want to get to the AI later. I want to tackle some of the quantitative aspects that you're dealing with first to make sure I understand them. The MRI paper that we mentioned several times here is called "Managing Scarce MRI Capacity in Overloaded Queuing Systems."

And you were dealing with a massive overload, 'cause this was the situation in the healthcare system after the pandemic, when so many MRIs had been postponed, and they had 475,000 people in Canada waiting for an MRI. And the Canadian Medical Association, I'm taking this right from your paper, estimated that it would take 10 months of overtime to clear up the backlog, and MRI systems can't always operate at that capacity to clear it up. So a huge, huge problem caused by a major crisis in the healthcare system. So there's no question about whether this problem is real. Everybody can see that it's there. Right.

So I wanted to ask you, in terms of quantifying this sort of stuff, what you're trying to get at is not just 475,000 identical cases and how we can juggle them to get them through the system. There are different degrees of urgency, which we've mentioned. Also, in your paper, you describe part of the problem as being about the stability of the system, right? This is a complex system, not just a complicated one, and the interaction of things can lead to, as we said with the traffic jam, suddenly everything grinding to a halt.

Adam: Right.

Cameron: Right. So it is not a problem where, you know, a simple tweak will necessarily lead to improvement. It could make things worse. So it's very, very complex. So how do you go about studying a problem like that? How do you break it down into things that you can begin to model?

Adam: Well, the first thing that we do is talk about the research questions here. And for that particular paper, it's about, look, we are in a situation where we're just so overloaded; there are so many patients in the system. The issue is that the longer someone waits, the sicker they could potentially get. And at the time, and even currently, that's not really being fulsomely taken into account. And so our thought process was, is there a way to model this notion of a patient becoming sicker over time?

And furthermore, is there also a way to account for the fact that not only is one patient getting sicker, but if there are a lot of patients who have the same diagnosis and they're waiting a really long time, we have a bigger and bigger problem in terms of the scale on our hands here? So is there any way to introduce novel measures, novel metrics, to account for this? And then, given we do account for this, how do we make decisions in terms of which patients we should prioritize at what time? That was sort of the start. The first research question was all about that.

Cameron: Mm-hmm.

Adam: And then we started to think about whether there are any other operational interventions. Again, we're not going to solve the backlog problem without expanding capacity, but hopefully we can manage it for a long enough time that we can be prepared when new capacity comes online. And so we thought about not only dynamically prioritizing different patients, but also how to group them together to ensure that there is some notion of equity in the system.

Cameron: Mm-hmm.

Adam: It shouldn't be that you are going to one hospital and I go to another hospital and we have the same diagnosis, but you wait twice as long as me. If we could somehow pool the hospitals together, maybe we would get similar access to care.

And so these questions started to sort of snowball, and once you want to rigorously address them, well, then the power of math comes in. Then we obviously have to test the effect of our math, and so we parameterize these simulation models with the rich datasets we have and try to learn from the kind of policies that we're recommending. And so that's how this came about. You try to do this as rigorously as possible with the goal of telling a story and answering a question. And I think by the end of it, we told a decent story.

Cameron: So basically what you're doing is you're talking to people, trying to figure out what the problem is, trying to model it in your head, how is this actually working? And you're developing a model then, which you can go back and look at the history of data and see what would've happened if we'd been able to apply the resources in a different way using this model.

Adam: Great. Great point. By talking to people, we have research questions that are, you know, in words, but then you formalize those research questions from the words to the math. And then, with the math, you start having to ask questions about tractability, about properties of these models that you try to prove and show.

And then at some point you try to use those models to make recommendations in the real world, right? And in the best case... you know, we might not be able to apply those exact policies in the real world, but first and foremost, we create a test bed, a simulation environment, where those policies can at least be tested before implementation.

And then you try to see how well they perform. And that's what the crux of the paper talks about. We propose three operational interventions, and we can make sure patients don't wait too long, especially ones that have waited a very long time.

But unless new capacity becomes available, you're never gonna have a patient just walk in and get an MRI.

Cameron: So, just to dumb it down for me, explain to me what it is about this complex phenomenon that means you can't simply say, well, we've got these MRI machines, let's do all the most urgent patients first, until they've all got their MRIs, then we'll turn to the second tier of patients and do all of them. And if then if we've got time left over, we'll do the third tier. Like why can't you do it in that simple mechanical way?

Adam: If you had enough capacity, you could, but we don't. And so let's say you only prioritize more urgent patients. Less urgent patients, they'd all be waiting. At some point, those less urgent patients become more and more urgent.

Cameron: Okay.

Adam: So now you have a problem, because those less urgent patients become more urgent, and then you're getting an influx of new urgent patients. Well, how do you balance all of those urgency levels in a way that patients who are not so sick don't become sicker and the urgent patients get the appropriate care that they need?

If you don't have enough capacity, it's a hard problem to solve.
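The dynamic-prioritization idea Adam describes can be sketched in a few lines. This is a hypothetical illustration, not the paper's model: each patient's effective urgency grows with waiting time, so the queue is re-ranked before every scan slot (the `aging_rate` and the patient data are invented).

```python
# Hypothetical sketch of dynamic prioritization: a patient's effective
# urgency grows the longer they wait, so the queue is re-ranked before
# each scan slot instead of using a fixed, static priority.

def next_patient(patients, now, aging_rate=0.1):
    """Pick the patient with the highest effective urgency.

    patients: list of (arrival_day, base_urgency, label) tuples.
    Effective urgency = base urgency + aging_rate * days waited.
    """
    return max(patients, key=lambda p: p[1] + aging_rate * (now - p[0]))

queue = [
    (0, 3.0, "urgent, just arrived"),
    (0, 1.0, "non-urgent, just arrived"),
    (-30, 1.0, "non-urgent, waited 30 days"),
]

# Under a static policy the new urgent patient (3.0) always wins; with
# aging, the patient who waited 30 days (1.0 + 0.1 * 30 = 4.0) goes next.
print(next_patient(queue, now=0)[2])
```

A purely static policy would serve new urgent arrivals forever while the non-urgent tier deteriorates; the aging term is what prevents that.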

Cameron: Yeah. Yeah. I can understand it now. It's that, you know, it's the fact that these patients are not static pieces of data. They are human beings and their situations are changing. If they're on the MRI list, chances are they've got a condition that is gonna get worse over time. And then you've got quality of life issues.

Adam: Exactly

Cameron: Right.

Adam: And currently, when you're diagnosed with some sort of issue and you are placed on the MRI list, you're given a static priority level. And that might not change. And so if you're, let's say, a patient who's been diagnosed with some non-urgent condition, and you wait for a year and a half, the chances are that non-urgent condition is gonna be, well, a lot more urgent.

Cameron: Adam, let's turn to the second paper that you sent me to have a look at. This is called "Learning to Optimize Contextually Constrained Problems for Real Time Decision Generation."

So this is a paper that appeared in Management Science. It's intriguing to me because it's about two different fields, right? You've got this quantitative research methodology, brought in from Operations Management. But you've also got this machine learning aspect, right? So, when you talk about machine learning, most people think ChatGPT right?

So that's the kind of machine learning we're talking about, the stuff that underlies those kinds of AI engines. Can you tell me first of all, like, who's on your team to bring those fields together?

Adam: Oh, I mean, I get to work with such brilliant folks. This paper in particular, Timothy Chan, Rafid Mahmood, Aaron Babier, they're all people from the University of Toronto.

And we have just sort of similar research interests. They have done healthcare operations research in the past. They're passionate about analytics and I was very lucky enough to be working on this paper with them. I've been also lucky enough to work with a variety of really amazing academics and hospital administrators and doctors.

So every problem typically involves some different analytic solution or empirical solution, and sometimes that breeds sort of new modeling, new paradigms for thinking about how to create algorithms or mathematical constructs. And this is one such paper where we had some empirical work in the past that we thought, Hey, look, this deserves much more rigorous treatment. And that is what happened.

You know?

Cameron: Yeah.

It's, uh, to me a very complicated paper because you're dealing with concepts that are a little bit foreign to me. So let's start with the basic one. first question I had when I started reading this was, what is an optimization problem?

Adam: Great.

Yeah. So we are talking about mathematical programs.

So basically these are models where there's an objective that you're trying to optimize. So it's either maximizing the objective or minimizing the objective. And to give an example, in an organization, you might want to minimize costs, right? Or you might want to maximize profit.

Now, you can't just unilaterally do that. You have constraints associated with those decisions. And so there are various levers you can pull. These are called decision variables, and they are restricted by, you know, the amount of resources you have. Or there might be relationships in your organization that you have to make sure hold.

These are constraints, and so you basically have a constrained optimization model. The goal is to choose a set of decisions or decision variables to optimize this mathematical model, which means to, again, push the objective as high or low as you can subject to these constraints. We in that paper say, Hey, how can you solve this problem using a machine learning methodology?
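The model Adam describes, an objective pushed as high as possible subject to constraints on the decision variables, can be sketched with a toy example. All numbers here are invented, and a brute-force search over a small grid stands in for a real LP solver so the sketch stays self-contained.

```python
# Toy constrained optimization (all numbers hypothetical): choose
# production levels x and y to maximize profit 3x + 2y, subject to a
# shared-resource constraint x + y <= 4 and a machine limit x <= 2.

def feasible(x, y):
    """The constraints: decisions are restricted by available resources."""
    return x >= 0 and y >= 0 and x + y <= 4 and x <= 2

def profit(x, y):
    """The objective we are trying to push as high as possible."""
    return 3 * x + 2 * y

# Brute force over a small grid stands in for a real solver here.
best = max(
    ((x, y) for x in range(5) for y in range(5) if feasible(x, y)),
    key=lambda p: profit(*p),
)
print(best, profit(*best))  # the optimal decision variables and objective
```

Dropping either constraint changes the answer, which is the point: the decision variables can't be chosen unilaterally.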

Cameron: Is that what's moving it past the MRI paper that we just talked about? Or is the actual optimization problem different from the MRI waiting list?

Adam: Yeah, it's a different problem. It's actually more general in the sense that it could be any such problem you're trying to create. The difference between...

Cameron: Solve, I hope, not create!

Adam: Solve! You're right!

So you, you're... any problem to solve. The difference here is, in the paper that you've mentioned, you need to solve this quickly. Time is of the essence. And so you might, for example, think, Hey, I want to take a trip downtown right now. Right? Um, well, what's the fastest route to get there?

You don't have, or at least the platform that you're using, they can't wait five minutes to give you the answer. It needs to be done quite quickly. And there are definitely constraints. You know, some roads might be closed. You might have issues with traffic in certain routes. So there are restrictions even though your objective here is to get there as quickly as possible.

And so how do you basically solve this problem with that knowledge of all the restrictions in the fastest time possible?

Cameron: Right. You've got a, a particular word in there that's interesting to me. You're describing your focus as being on "continuous" optimization problems. What's that mean, "continuous"?

Adam: Good question. So when you're talking about, let's say, scheduling patients to MRIs, a patient is a discrete individual. You can't split them in two and have half of them go and have an MRI, right?

Cameron: Half of them go to one hospital and half go to the other,

Adam: You know, or have each body part taken into a different MRI room; that's not gonna happen.

Right. But furthermore, the MRI machine is one whole quantity, so you have to make sure that you're using all or none of it.

In this particular problem, we are looking at continuous quantities, so it's like infinitely divisible constructs. You can think of water that way, right? I can have one liter, or 1.01 liters, or 1.001 liters, et cetera, et cetera, and many things decompose into that kind of problem setting.

And so what we study are these problems specifically, because they have certain properties that we exploit, but also because a lot of the time they form good approximations to those discrete problems. Like I said before, say we want to solve a transportation problem: what's the fastest way to get from here to downtown?

That might actually be too difficult to solve very, very quickly. And so our alternative is to do what's called relaxing some of the restrictions and say, well, how can you get all of your body from here to downtown as fast as possible, even though different parts might go through different routes?

Yeah, it might not be totally feasible in real life, but now we get a sense of, well, how hard is the problem? What are potential routes that might be considered? And it gives us a bound against which we can assess how close the true solution is.

Cameron: Okay. So you relax the constraints, look at the problem.

Adam: Exactly.

Cameron: And then reimpose the constraints to get something that's more feasible.

Adam: Exactly.

Cameron: Okay.

Adam: And quicker.

Cameron: So you gave a couple of examples in the paper. One is investment portfolio optimization. Right. If you're either an individual trader or a fund manager, and you're trying to optimize the portfolio of investments that you've got.

Adam: Mm-hmm.

Cameron: So that's one kind of a problem that is similar to what we're talking about in the paper. The other that you gave is personal cancer treatment.

Adam: Yeah. Radiation therapy treatment plan.

Cameron: Yeah. So tell me a little bit more about that one.

'Cause I understand portfolios of investments. Yeah. Tell me about personal cancer treatment.

Adam: Well, typically what happens is that if someone is diagnosed with cancer and they need radiation therapy, there are these contoured CT images that are provided. And there's a whole pipeline associated with going from, I have this contoured CT image, to, I want to create a radiation therapy treatment plan. So there's this mapping between the images and the final plan that is potentially deliverable by a radiation therapist or an oncologist. And this whole process is quite complex. The reason is that while it begins as an optimization model, there are issues with taking that optimization model and implementing it in practice. And so an oncologist has to continuously review plans that are generated by any model to make sure they're feasible.

Cameron: Mm-hmm.

Adam: And so what we try to do here is replace that iterative pipeline, which might take days, if not weeks, with one single-shot pipeline that could potentially generate plans in, you know, a few milliseconds. Why? Well, for the patient it's better, right? Instead of waiting several weeks, maybe even months, for their treatment to begin, well, they can start treatment the next day.

Cameron: Okay, so you don't want to present to the oncologist all of the possible things that the AI thought of.

Adam: Exactly.

Cameron: So you've gotta constrain that down to the better ones.

Adam: Exactly.

Cameron: How do you begin to get to that?

Adam: That's great. Yeah. So in the paper, we talk about this notion of feasibility versus optimality. The feasibility aspect is, hey, we have constraints. Particularly with the radiation therapy treatment planning, we want to make sure we give radiation to the places that need it, but make sure that the other places that don't need it are spared, right?

So these are the restrictions, and there are various medical requirements that need to be satisfied here. And then we have optimality, right? We want to try as best as we can to target the structures that need radiation with as much dose as we need; we want to get as close as possible to the recommended dose.

So in the paper we basically use machine learning and rich training data to train a model on these feasibility requirements, and then use that model to guide another model that we train to generate optimal plans. And that feasibility model kind of guides the generative model in which plans sort of make it out.

Cameron: Okay. How do you, like, what does it mean to train an AI?

Adam: In this particular project, uh, we gave it contoured CT images and the implemented plans. And so we're having it learn this mapping between what people said they wanted and what they actually got.

Cameron: Okay.

Adam: In other settings, it could be very different.

For instance, we have a project where we're trying to train an AI agent to learn a mapping between how people read a particular narrative, what they feel about certain constructs, and a value between zero and one for those constructs. You know, you and I could potentially read it, and maybe if we were very, very experienced, we could write down a number.

But if you have a neural network, or people who don't know, how do you automatically generate that? And so what you can do is train a neural network based on many, many people's personal assessments of what that number should be, and then it will spit out a parsimonious value depending on all of the data that you've collected.

Cameron: So it begins to imitate what it sees the people in your data set doing.

Adam: Exactly. And that's, I guess, the core of machine learning right there.
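That imitation step, learning a mapping from examples of inputs to human-assigned scores, can be sketched with ordinary least squares standing in for the neural network. The ratings below are invented for illustration.

```python
# Minimal sketch of "learning a mapping from examples": fit a line to
# (input, human-assigned score) pairs so the model imitates the raters.
# A neural network generalizes this idea; the training data is made up.

def fit_line(pairs):
    """Ordinary least squares for y ≈ a*x + b."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

# Scores (0 to 1) that raters assigned to four narratives.
training = [(1, 0.1), (2, 0.3), (3, 0.5), (4, 0.7)]
a, b = fit_line(training)
print(round(a * 5 + b, 2))  # the model's score for an unseen narrative
```

The fitted model then produces a single parsimonious value for inputs no rater has seen, which is the behavior Adam describes.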

Cameron: Yeah. Now, I've had some interesting experiences with AI, primarily ChatGPT. I used it to plan a trip to the UK.

Adam: Fantastic.

Cameron: And I had, uh, a bunch of, uh, universities that I was visiting to give presentations to. And then I had a five day break before I got to my last one, and I wanted to visit cities that had cathedrals in them.

Adam: Wow.

Cameron: Because I figured if it's got a cathedral, it'll have the kind of architecture that appeals to me.

So that's the constraint I gave it. I'm here, I want to end up there. So I was in Coventry and wanted to end up in Glasgow. Find me some cities with cathedrals that I can visit along the way and plan a rail trip. And it came back with, you know, stop number one was Lincoln. Stop number two was Bath, which isn't anywhere on the way from Coventry to Glasgow.

Adam: Right?

Cameron: So it just imagined that somehow I might want to make this massive day trip to a city in a completely opposite direction. You know, it's what people have described as the hallucination problem in AI. Right. That it just makes shit up, and says, Here you go.

Adam: Mm-hmm. It wants to please you.

Cameron: Yeah. Well, it didn't, right. It failed. It specifically failed on that goal. So do you have that kind of an issue when you're training the AI, that it'll just hallucinate and come up with a solution like we were talking about before? Well just cut the patient in two and send half here and half there.

Adam: Yeah. So let me first just differentiate one thing. So a lot of the work that I have done previously is, I think, more appropriately described as "machine learning."

So you're taking some model, let's say a neural network. It's really agnostic. It's never seen any data before and you're training it on a very specific problem.

And so the solution that it will generate is only really applicable to the very narrow problem that you're, that you have the data for.

Something that you're describing, this AI agent is trained on, well, it's like a world model. It's trained on a lot of data and it's sort of this utilitarian tool that can be applied to a variety of domains.

And I find that prompting helps. Getting better at prompting. So for instance, maybe instead of whatever prompt you wrote, you have to give it more guidance as to the path that you would love to take, you know? The issue I find is you don't always know what it's thinking, obviously, right? And so sometimes it's not really clear what you've left out of a prompt that led to the suggestion it made.

Cameron: Yeah, and other people have described that as a solution. And I think that's like kind of blaming the victim, frankly. So thanks a lot for that.

Adam: Yeah. Yeah. I mean, look, I am a big fan of AI in the sense that I think it's a really good productivity tool to use. I use it during my research to help with productivity. But to produce research, it requires a lot of handholding. So in some sense, it's kind of cool, because you get to use a novel technology. You might get insights that you would not get had you not used the tool. But it's not a set-it-and-forget-it type of thing. There's so much engineering that goes on behind the scenes in order to basically spit out something that's competent.

Cameron: Mm-hmm. Now the results of this paper, is it just a proof of concept? Or do you actually have a tool then that can be used by...

Adam: I mean, there is a tool, but it hasn't been implemented anywhere. So it is proof of concept insofar as we have the technology and we have some examples where it works.

Whether people want to use it, it's more on us at this point, I guess, if we want to create a tool or a company or commercialize it. It's just hard. It's hard to switch from a research kind of mentality.

Cameron: Yeah. What's your GoFundMe page?

Adam: Yeah. You know, I need a medical issue first to get that up and running. Knock on wood that doesn't happen, though.

Cameron: No, definitely not.

Adam: Yeah, it's actually interesting you bring this up, because this is something I think about at the culmination of a lot of the projects that I've worked on. Like, is this commercially viable? Is this something that I can start to work on and bring to the forefront more than it is?

I'll be honest with you. I've tried to work with a variety of organizations to create tools, and either it just sort of spectacularly fails, or they use it for a time, which is great, and then different management comes in, or different priorities, and then it collapses.

What I've found works better is providing guidance, you know, like consulting. Free consulting typically has more success in terms of adoption. But when these tools fail, the thinking is, well, should I pick it up? Should I try to push this forward, get funding, get it... It's hard sometimes to do it.

Cameron: You are living a continuous optimization problem as an academic, right? You know, to be an academic is an optimization problem. You've got so... you've got an unlimited number of things you could be doing.

Adam: That's a great point.

Cameron: And you have to decide how you're gonna spend your time.

Adam: That's a great point. And that's the issue. You have to really believe in that product and believe there's some sort of niche market out there, and these are interesting questions to have. But then there's also the research, the other research that I'm working on, or teaching, or student supervision, and there's lots of service. There's never a dull moment, so I think the timing hasn't really been right so far in terms of, Okay, let's get behind this tool and let's take it to market. Maybe that'll change, but...

Cameron: If you do that, it means that you will not do something else.

Adam: Exactly.

Cameron: Could be your next research project. It could be, spend less time with students.

Adam: Mm-hmm.

Cameron: As you said. So that's a tough, tough choice.

Adam: And I think at this point in my life, you know, I have two young kids. I'm busy. Taking a risk is something that I might want to pursue in a few years, but right now it's pretty busy. So I think, at least until everybody's a little older and I get more time to sort of think about these things, I don't think my startup ideas are gonna come out anytime soon.

Cameron: Yeah. You don't just want to run the academic at 120%.

Adam: Yeah. Uh, you know, ...

Cameron: That's what they were suggesting with the MRI machines! Yeah. Why would you be any different? So there's a whole career path possible for academics, just connecting to industry professionals or, you know, sector professionals, whether it's non-profit or healthcare or for-profit companies. You know, you could turn your career into that, but no matter what you do, you are still gonna have a teaching responsibility as an academic. So tell me a little bit about helping students understand how to do this stuff.

Adam: I've tried to be very thoughtful about this, but definitely my thinking has matured over the years as I get more experience, but also with the COVID-19 pandemic and then the rise of generative AI as a personal support tool. I think nowadays, my thinking here is, on one hand, students need to know that this concept exists, because if they can't even ask the question, then ChatGPT is not gonna help them. And even if ChatGPT somehow stumbles on this particular tool or set of analytic skills, if you've never seen it before, you're gonna be hard pressed to use it.

So first, existence. And then this notion of working with AI to really understand what is true, what is not true, and understanding the tool from an applied perspective. What kind of questions can be answered by it? When is it appropriate to use? What kind of technologies do you need to implement it?

And so, if they do this, and then we actually go ahead and implement these analytics tools to solve a variety of problems and hopefully show that it can be done, the hope is that they go outside of the university and think about this, because the barrier to entry now has been reduced.

Cameron: Mm-hmm.

Adam: That is, I think, what I hope to bring, at least nowadays. Again, that's changed quite a bit over the years, but today, that's the goal.

Cameron: You've got, I think, two groups of students kind of implicit in what you're talking about. Mm-hmm. There's the ones who have to manage this stuff.

Adam: Right.

Cameron: So they have to be able to ask those critical questions. They have to understand about assumptions.

Adam: Mm-hmm.

Cameron: Right. They have to understand about the complexity of the problems that they're doing. They have to understand about how constraints affect these models. Right? So they've gotta be able to interact with the people who can actually do the stuff.

Adam: Yes.

Cameron: Now, do you also have students in your class who have the math skills to execute that stuff, or is that not where you're at yet?

Adam: You know, I think I try to teach it at a level where you don't necessarily need to be a math whiz. I think I use math as a language.

It's about articulating with precision what you mean when someone writes, you know, a two-paragraph spec about what they want. Right? So math is the language, and that language actually does a very good job of turning into technological tools or programming code. And so that, I think, is what I try to espouse.

It's like, Hey, you're learning this as a way to communicate your models or your ideas to people, but invariably, you know, once students start to get good at that, the next step is, Well, what story do you want to tell? And I think one of the things that we have really stressed, at least in my department and at Schulich, is just being very intentional about, Hey, I'm gonna tell you a story. This is what it's all about. And you want to be rigorous with the tools that you use, but you want to weave together a coherent narrative so that you can convince people not only that your recommendations are sound, but that your ideas have value.

And I think in the age of AI where basically it's easy to do a lot of things, that storytelling is still vitally important.

Cameron: Mm-hmm. What kinds of stories are you getting out of your students then, in this process?

Adam: Yeah. Well, so I'll give you one example. My fourth-year undergraduate course culminates in a final project, and I make it very open-ended. So last year we talked about fairness and equity in assigning high school... or I guess it's elementary school students, to high schools. Because in Toronto there's a lot of specialized schools, and people apply so they can maybe, you know, be at one school because it has some sort of program that other schools don't.

Now, historically, many years ago, it was all about a meritocratic process. And then sort of recently they changed that to being equity-based, with no meritocratic aspects at all. And so we talked about what's a good balance. What was interesting is that there's no right answer to this question, right? There isn't. You can argue whatever you want, but using a data set that I collected and gave to them, and using the tools that we discussed in class, all these analytics tools, you could evaluate different notions of equity and fairness and how much meritocracy you want to incorporate into your solution.

And so then you have this discussion. It's not about the models, it's about, Well, what story do you want to tell? Like, what kind of things do you want to emphasize? It's great 'cause in the last class, they present their solutions, we talk about these different aspects, and it sort of just goes to show you that in the real world, there is no one solution. It's really the story that matters.

Cameron: Mm-hmm. Yeah. That must have been a fun class, the last one. Did any of the students, you know, kind of see themselves in that particular data set? Like, you know, that they were the person who was an athlete that wanted to go to this particular school 'cause they had a great athletics program. Did people resonate with that?

Adam: Yeah, there was one student whose sister, I think, didn't get into a school. It was an art school, and she was very passionate about... I don't remember exactly the discipline, but she'd done a lot of extracurriculars in this one domain because she was so passionate, and then didn't get into the art school because... I don't know.

So it definitely resonated with this student. And in fact, they recently, I guess it was late last year, changed the rules again, from equity all the way back to a meritocratic focus. And again, you know, there are problems with the extremes at both ends. So it's still an open question. It's like, what's the best balance?

Cameron: Yeah. Well, I mean, at least they're trying other solutions to see how it works. Right? You can get very stuck in a bureaucratic model that says, this is the only way of thinking about a problem. So maybe that's one of the benefits of the kind of research that you're doing, is not simply trying to optimize what people are doing, but helping them see that there might be a different way of thinking about the problem in the first place.

Adam: And one of the things that I think is important is being very intentional about the process. Because...

Cameron: The research process?

Adam: The research process.

It's not only deciding on what models to use, but what data to include, what tests to run, and being cognizant about the stakeholders that are associated with your solution.

Just because you might have biases that you are not thinking about, had you not thought about these problems. And so with the analytics model, being very intentional and very, very transparent about what you're doing is helpful, because it leads to better research, but then also you're just more cognizant about how your research fits into the broader question that you're trying to answer.

Cameron: Yeah. It's a fascinating set of tools that you've got, and I love your openness to understanding complex problems. That's fantastic.

Thank you, Adam.

Co-producer Andrew Micak: A quick follow up for you actually.

Adam: Yeah, please.

Andrew: I was actually very interested in your discussion of mathematics and AI as a language.

Adam: Yeah.

Andrew: So how do you teach them how to read the language?

Adam: Yeah. That's a good question. So, I mean, these are the courses that I teach. It's just, you know, they've seen what a variable is, from calculus or from grade... I mean, do you remember like plotting y = mx + b?

Cameron: Oh, yeah. Algebra.

Adam: They know what a variable is, and then each problem is very similar, so the structure is all the same. You have an objective, you have constraints. And then it's just about thinking about how these different variables relate to each other, and how you would create constraints that are meaningful in real life. And so all our classes are really just about this: how do you express yourself this way? How do you express yourself that way? And the assignments are just, I guess, more of the same, except with a technological tool like Python.
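The structure Adam describes, decision variables, an objective, and constraints, can be sketched in plain Python. This is a toy illustration only, not anything from Adam's research or courses: the patients, urgency scores, scan times, and capacity below are all made up, and a tiny brute-force search stands in for the real optimization solvers students would use.

```python
# Toy "who gets the next MRI slot?" model: pick patients to scan today,
# maximizing total urgency (the objective) subject to a limit on available
# scanner minutes (the constraint). All data here is hypothetical.
from itertools import combinations

# (patient id, urgency score, scan minutes) -- made-up numbers
patients = [("A", 9, 45), ("B", 5, 30), ("C", 7, 60), ("D", 3, 30)]
capacity = 90  # minutes of scanner time available today

def best_schedule(patients, capacity):
    """Brute-force every subset of patients (fine at this tiny scale):
    keep the feasible subset with the highest total urgency."""
    best, best_score = (), 0
    for r in range(len(patients) + 1):
        for subset in combinations(patients, r):
            minutes = sum(p[2] for p in subset)
            score = sum(p[1] for p in subset)
            if minutes <= capacity and score > best_score:
                best, best_score = subset, score
    return [p[0] for p in best], best_score

print(best_schedule(patients, capacity))  # → (['A', 'B'], 14)
```

Note the answer is not first come, first served: patient C is skipped because scanning A and B together yields more total urgency within the same 90 minutes, which is exactly the kind of trade-off the objective-plus-constraints framing makes explicit.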

Andrew: So, is that how you learned it or is that the way you taught yourself to learn?

Adam: See, when I learned it, it was very different. It was very technical, much more mathematical. We're talking about proofs and... I'm not trying to date myself too much, but these tools, like Python and the libraries that you have, they didn't exist.

So it was a lot more of a paper-based type of writing.

Cameron: So students just need to know which functions to link to in order to process this stuff.

Adam: Because if you give them a problem, they'll put it to ChatGPT. That's the first thing they'll do. But then they have to be the person who validates the answer.

Cameron: Yeah. They're the oncologist saying whether this is actually a reasonable treatment plan.

Adam: Exactly. And the thing that happens is, it's not that it's all wrong. It's mostly right, at least when I interact with it to create the assignments. But there are these sort of small things that are not right, and part of the incorrectness could be like a syntax issue, or maybe some sort of error that it makes, but other times it's like, are you sure it interpreted what you meant in the correct way?

Like what you're... and they need to be able to dissect that, if I may. And so what we do is just solve example after example, and every class there's a new complexity that is added, and we solve more examples, and by the end, hopefully, they've kind of learned the language and been exposed to the complexity associated with these types of models.

Andrew: One other question.

Adam: Please.

Andrew: So you said there's really no wrong answer.

Adam: Yeah.

Andrew: With these things. So how do you teach ethics to your students in these classes?

Adam: So there's no wrong answer insofar as you can make any recommendation. What is important though, is that you back that up. And so it's backing it up with the quantitative modeling we talk about in class. So we have a data set. We have a problem. You need to create the appropriate model. It needs to follow what you said you were gonna do. So if you have certain aspirations or you want your model to address certain ethical concerns, well that has to manifest itself in the modeling. But then when you're doing the assessment of the model, these kind of ethical concerns have to somehow be represented.

And so one of the discussions we've had is, Well, what about these people? What about these situations? And in fact, it's brought up a lot of the time by the students themselves. So it's really about how you tell the story and how you close the loop associated with introducing the relevant notions that you care about in this particular model.

Making sure that the model reflects that, and then addressing concerns associated with aspects that you might not have optimized for, if that makes sense. So it's a story that you have to tell, and that's why I love the assignment, because it's really about telling a story versus, Oh, this has gotta be the right answer.

Cameron: Mm-hmm. Thank you, Adam, for being on the podcast.

Adam: Thank you. My pleasure. It was a real, real treat.

Cameron: Great.


Patient lying on the bed of an MRI machine. (photo credit: Accuray, Unsplash)


Links

Adam Diamant’s faculty webpage

Adam Diamant on Google Scholar

Research

"Managing Scarce MRI Capacity in Overloaded Queuing Systems"

“Learning to Optimize Contextually Constrained Problems for Real-Time Decision Generation”

Credits

Host and producer: Cameron Graham
Co-producer: Andrew Micak
Photos: York University
Music: Musicbed
Tools: Descript
Recorded: January 23, 2026
Location: Toronto

Cameron Graham

Cameron Graham is Professor of Accounting at the Schulich School of Business at York University in Toronto.

http://fearfulasymmetry.ca