
PODCAST: Episode 22

Bellarmine on Business

Generative AI in Business and Education

Episode 22: In this episode of the Bellarmine on Business podcast, Jim Ray hosts Alisha Harper and Fred Duff to discuss the implications of generative AI in business and education. They explore the differences between AI and generative AI, the ethical considerations surrounding its use, and how it can be effectively integrated into educational settings. The conversation emphasizes the importance of teaching students to use AI responsibly while enhancing their critical thinking and soft skills. The episode also highlights the evolving landscape of AI technology and its potential to transform decision-making processes in business.

The conversation then delves into the complexities of large language models (LLMs), their applications, and the ethical and legal challenges they present. The speakers discuss the current state of AI governance, the implications of intellectual property in AI-generated content, and the importance of educating users on AI tools. They also explore the future of AI models, the significance of prompt engineering, and the human element in AI development, emphasizing the need for a balanced approach to harnessing AI's potential while mitigating risks.


Chapters

00:00 Introduction to Generative AI in Business and Education

05:59 Understanding AI and Generative AI

10:04 Ethical Considerations in AI Usage

14:55 The Future of AI in Business and Education

26:12 AI Governance and Ethics: The Wild West

28:58 Are there Risks to Using DeepSeek?

32:11 Intellectual Property Challenges in AI

37:30 Zero Trust and Data Protection in AI

39:14 AI in Education: Preparing for the Future

40:53 The Role of Prompt Engineering in AI

43:26 The Human Element in AI Development

44:59 Closing Thoughts

Introduction to Generative AI in Business and Education

Jim Ray (00:04)

Welcome to this episode of the Bellarmine on Business podcast sponsored by the Rubel School of Business in Louisville, Kentucky. My name is Jim Ray. I'm a 2008 Executive MBA graduate of Bellarmine University. I've got a regional small business consulting firm right here in town. I'm very proud to welcome two of my friends from the Rubel School of Business faculty, Alisha Harper and Fred Duff. Alisha, how are you?

Alisha Harper (00:24)

I'm good, Jim. How are you?

Jim Ray (00:26)

I’m good.  It's nice to have you back in the studio. We've done this a few times, so it's awesome. I love today's topic where we're going to be talking about generative AI in business and education. And along with that, I'd also like to introduce to you guys Mr. Fred Duff, who's also an adjunct professor, but he's got a pretty interesting background.

Fred Duff (00:43)

Hi Jim, yeah, thank you. Been around a little bit.

Jim Ray (00:47)

Do me a favor, Alisha, let's talk a little bit about your background to bring the listeners into the circle here, so they kind of know who's talking right now. You actually are the Executive Director of Graduate Programs for the business school at Bellarmine.

Alisha Harper (00:59)

Yeah, I'm the Executive Director of Graduate Programs in the School of Business. And so, I really oversee our MBA program. I oversee some of our executive education programs. I'm also an associate professor of accounting. This is actually going to be a little bit new for me. The last two times that I was here, we were talking tax, right? I'm a former attorney for the Internal Revenue Service, so tax is my background.

But a couple years ago, a little thing called ChatGPT hit and I got pretty interested, pretty excited. So, I started figuring out, okay, well, what can I do? I don't have a technical background, but I am really enjoying learning about GPTs in general and incorporating them into the classroom because I think this is such an amazing tool. And so, we're going to talk about not just generative AI, but we're going to talk a little bit about what's the difference between AI and generative AI.

I love having Fred with me because Fred has the technical background, whereas I'm just in the classroom having some fun.

Fred Duff (02:01)

Well, you know, I'm honored really, I think highly of both you guys and obviously love Bellarmine. I'm an undergraduate from Bellarmine and also a graduate from Bellarmine's MBA program with the analytics track.

After I got the MBA analytics track, I told Dr. Mattei I wanted to learn more and he directed me to the University of Texas.  They had some pretty interesting stuff. I looked into it and I got a postgraduate degree in business analytics and data science.

When I got done with that, which was a pretty intensive program, I was with people from all around the world, and I immediately got into another one, a postgraduate degree in artificial intelligence and neural networks, for another year. When I finished that, I took a couple of other classes, and then I got into a postgraduate degree in data engineering with MIT. It's a collaboration. I've probably got 20 or 30 different certifications in between there. I just completed one in data science in the cloud, and I'm trying to become an expert in AWS, Azure, and distributed computing.

Jim Ray (03:08)

Well, friends, if you just heard those credentials, that's why I was excited to get this episode going. Because again, you've got somebody who's really good at using it, who enjoys using it from a consumer standpoint. And then I've got somebody like Fred Duff over here who's got the technical background, who's actually going to be able to explain it even deeper for us. So, I really appreciate it.

Fred Duff (03:27)

And I'll interject if I can, because I always enjoy hearing Alisha say that. You know, Google found out a long time ago that the people who excel in data science are not necessarily the people who grow up through the technical track. It's the people who can think on multiple dimensions, who can think innovatively, who really seem to excel at this.

Jim Ray (03:50)

That's how it is.  Understanding how it works is one thing, but what can I do with it is a totally different aspect.

Fred Duff (03:57)

You cannot think in a siloed manner, and that's actually completely realistic and normal for an attorney or an accountant. In my case, I was a sales manager. I'm in real estate; a lot of people around town know me from real estate. But those areas, how it works in the real world, are why I'm particularly interested, and perhaps that's one of the things that you liked about it.

Alisha Harper (04:23)

Yeah, yeah, no, it was really exciting and I kind of delved in and started learning for myself and then realized very quickly that this was something that probably needed to be brought into the classroom.

Jim Ray (04:35)

That makes a lot of sense. Now, we've spoken before, and you tend to really like one of the GPTs, called Perplexity, right? That's one that you tend to enjoy using.

Alisha Harper (04:44)

I do. I have versions that I use. I use both ChatGPT and then I also use Perplexity. As an academic, one of the reasons that I really like Perplexity was it automatically pulls the citations. Where did it get its data? It's real easy for me to just click on a link and verify that what it's saying is accurate.

One of the biggest things when you're dealing with particularly generative AI is the idea of what we call hallucinations. And I'll let Fred tell you a little bit more about what a hallucination is and how it comes about. But at the end of the day, it's one of those things that it's almost like gen AI is making up an answer or a solution or a citation. And one of the things that I really like about Perplexity is the ability to have that citation right there.  Just being able to click on it and it immediately takes me to where in the web it got that information.

Jim Ray (05:45)

It gives a little bit more confidence in citing it or using it, right? Well, let's do this. Why don't we, like all things, why don't we start at the beginning? And Fred, could you help us understand what is AI and then what is this thing called generative AI?

Understanding AI and Generative AI

Fred Duff (05:59)

Okay, well maybe it's a good place to start by giving just a brief overview of how we got where we are. So, the common vernacular is AI, or artificial intelligence. I like to say it's not artificial, and it's also not intelligent per se. But it has its roots, our modern version, I guess you could say, back in the late 1940s with the US Department of the Navy.

They brought together neuroscientists, particularly child neuroscientists, combined with mathematicians, to see whether a machine could look into the distance and recognize the numbers on the side of a ship. That was extremely critical because of the friend-or-foe identification problem coming out of World War II.

So, they developed the building block of what we have and still use today. It's called the perceptron, and it is really nothing more than a switch. That switch, believe it or not, is modeled after a single human neuron, which we know a lot about. It was developed further in the 50s. There was a researcher named Rosenblatt who predicted that computers would be able to know, communicate, walk, talk, see, and do other things. And he's been proven surprisingly accurate.
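
For a concrete picture of the "switch" Fred describes, here is a minimal Python sketch of a perceptron; the weights and bias values are arbitrary illustrations, not anything from the episode.

```python
# A perceptron: a weighted "switch" loosely modeled on a single neuron.
def perceptron(inputs, weights, bias):
    """Fire (1) if the weighted sum of inputs crosses zero, else stay off (0)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Example: a two-input switch that fires only when both inputs are on.
print(perceptron([1, 1], weights=[0.5, 0.5], bias=-0.7))  # -> 1
print(perceptron([1, 0], weights=[0.5, 0.5], bias=-0.7))  # -> 0
```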

Take us forward a little bit. The problem back then, of course, was that we were dealing with transistors; the computing capacity was not there. However, they did have some impressive results. Then take us forward to 2012 and the AlexNet project out of the University of Toronto, led by a researcher named Alex Krizhevsky, where for the first time we had image recognition that could recognize images at a higher level than human beings. That was through an activation function called the ReLU. What you're basically doing there is going from "on or off" to "maybe." It's a little more complicated than that, but that's a good approximation.
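
To make the "on or off" versus "maybe" distinction concrete, here is a small sketch contrasting a hard step activation with ReLU; the sample inputs are made up.

```python
# Step activation ("on or off") vs. ReLU ("maybe"): ReLU passes along a
# graded signal instead of forcing a hard 0/1 decision.
def step(x):
    return 1 if x > 0 else 0

def relu(x):
    return max(0.0, x)

for x in (-1.0, 0.5, 2.0):
    print(x, step(x), relu(x))  # step gives 0 or 1; relu gives 0.0, 0.5, 2.0
```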

And so, combine that with the revolution we had in putting data online, having access to that data, and now having the computing capacity; what we've done is democratize both the computing capacity and the algorithms, many of which have existed for some time. Then, back in 2017, the University of Toronto, along with Google, was working on a project to translate language. They took an RNN, which is a sequential artificial neural network, and were able to parallelize it through something called a transformer, where now we can read a lot of data and interact with it. And that is the modern generative AI product we have today.
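
The transformer's core mechanism, roughly as described here, is attention computed over all tokens at once rather than one step at a time. Below is a minimal scaled dot-product attention sketch with invented dimensions, a conceptual illustration rather than any production implementation.

```python
import numpy as np

# Every token compares itself to every other token in parallel, which is
# what let transformers replace the step-by-step RNN.
def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # query-key similarity
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # weighted mix of values

rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(4, 8))                 # 4 tokens, 8-dim embeddings
print(attention(Q, K, V).shape)                     # (4, 8)
```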

Google came out with BERT, which sits in the background of Google search, YouTube, et cetera. And here we are.

Jim Ray (09:31)

And now it's out there, so now we're trying to figure out how best to use it. And there's still some fear and intimidation out there, I guess, about where it's all going to go. But it's an amazing thing. I talk to people all the time in business who are trying to use it. A lot of marketers are trying to figure out how do we put this to work? But also, when it comes to the classroom, I think that that's another issue. Alisha, let me kick it back over to you then. In the classroom, let's talk about some of the perspectives there. How are students engaging these AI tools and what are some of the ethical considerations?

Ethical Considerations in AI Usage

Alisha Harper (10:04)

I'm really glad that you asked that, because I was having a conversation earlier today, and it comes down to this: there were so many faculty, especially when ChatGPT first came out, who were afraid of it. Students are going to use it to cheat. Students are not going to learn to engage in critical thinking. It's going to defeat soft skills. It's going to defeat communication. It's going to defeat all of these things. It's the ultimate crutch. Exactly. But I think the reality is the complete opposite.

I think students should double down on soft skills, right? It's one of the beauties of Bellarmine: we are a liberal arts institution that engages in soft skill development. I think generative AI specifically is going to be a fantastic tool for some of the more technical things, but you're still going to need that human component, so double down on those soft skills.

The other part of it is our job as educators: it's to teach them how to use generative AI as a tool. We can bring it into the classroom and teach them how to engage in prompt engineering, which is: how do I get this tool to give me the product that I'm looking for? And I can utilize this tool to have them engage in critical thinking, to have them engage in communication.

We do a great example in one of my tax classes where we talk about data analysis, where they pull financial information and they ask GPT or Perplexity or Gemini or Copilot or whichever GPT you want to use to provide a summary of the data, which they could do themselves. It just does it much quicker. Now, one of the things that I always emphasize is verify, verify, verify.

Because at the end of the day, it will hallucinate. It will make things up. And so, you have to know what you're looking at. You still have to have those technical skills to verify it; it just makes everything move much quicker. The other thing that I've found, from a tax perspective, is that at this point it's not excellent at legal research or at delving into the exceptions that are part of the code. We play around a lot with having them write memorandums where they do tax research using gen AI, but then they also have to go in and actually do the research to verify what it got right.

Jim Ray (12:34)

That makes sense. But there's also going to be some ethical considerations where there's a potential over-reliance, right? On, well, hey, I've got this thing. It can just write most of my paper and I'll change the first and the last paragraph and there I am. That's not the desired outcome either.

Alisha Harper (12:50)

No, we don't want that outcome. And I think that's where we as educators can really have an impact. If we teach them how to use AI, and we teach them both its power and its limitations, that's where I think we can tackle those ethical components. From a legal perspective, there's also that conversation of: as a lawyer, if I use AI to draft something, I submit it to the court, and it's wrong, that's still malpractice by me. Absolutely. And me being in accounting, obviously, I have to make that clear to my students, because as CPAs they are subject to the rules of malpractice.

Jim Ray (13:28)

That makes a lot of sense. I like, looking forward at Bellarmine's program, the fact that we can actually begin preparing students for a career that is going to utilize AI. I mean, this is the time. This is the generation that has always grown up with a device, right? They've always had a cell phone, probably from a very young age, and now they're on the forefront of being heavy adopters of not only social media, but now AI. And I'm just looking at these guys going, you're getting all the breaks, man. We're Gen X. That was a different game.

Fred Duff (14:02)

I talked about this yesterday, and I'm sorry to interject, but it's the tech-enhanced business professional who is going to succeed in the future. You can't avoid it; it's here. Now, one of the things that drives me a little bit wonky is the expectation that these generative models are the solution to all our problems. Actually, there are a lot of different modeling techniques, some of which don't even utilize artificial intelligence, that are more accurate. And so, it's about selecting the proper tool for the proper problem.

Artificial intelligence requires a lot of data and a lot of training. It's actually computationally expensive. If you don't have a lot of data, a different type of model might be more effective and more accurate, which is really the basis of what my class is about.

The Future of AI in Business and Education

Jim Ray (14:55)

Yeah. Well, and that's a great segue. What I'd like to do, Fred, is have you dive in on the technical and practical perspectives. You've got use-case scenarios for analytics, predictive modeling, the data analytics. Like you said, the tech-equipped professional moving forward. I think everybody's looking at the data these days.

Fred Duff (15:12)

Yeah. I get into conversations about this. I've got people reaching out to me on a fairly regular basis. I heard a CEO, of all things, say, "Fred, I don't know how I'm going to monetize this." And I stepped back and said, my gosh, are you kidding me? This is fundamentally about making better decisions quicker. And if you can't understand that that's the essence of running anything, that's a real problem. Maybe you need to step back and get a little bit more education on this stuff.

I think the first thing to recognize is that this is not a change of the world per se. Rather, it's going to speed things up, just like every other development we've had. However, I think this has the capability, like Alisha and I were talking about yesterday, of truly being unlike anything we've ever seen, because you are bringing all this data, all this reference capability, all this statistical capability to bear on making better decisions faster. It's just amazing.

Jim Ray (16:25)

Well, what I love about it is it's going to take away a lot of the subjectivity. Ideally, it'll take a lot of the subjectivity out of the analysis, which, okay, maybe I want to plug that back in for some specific reason, but at least I'll have the right numbers.

Fred Duff (16:37)

The concept of bias is something we deal with as human beings. We have bias, and the first step is recognizing that bias exists; if you don't first make that assumption, then you're really starting off on bad ground. A lot of the modeling we do directly addresses bias, which is no different from what an attorney deals with, or an accountant, or anybody else.

Jim Ray (17:01)

Let me ask you this, and you teach some of these classes at Bellarmine as well. How does it foster the collaboration in the classroom? Alisha, you talked about that a little bit, about how you're running the cases, but you're also doing your own work on this side to see if there is a variance and why would that happen? But how are you seeing it in the classes?

Fred Duff (17:20)

I think really GPTs in general have an incredible capability for students to interact and learn faster, better than ever before. You basically have in your pocket, a professor. Now, the question is, how was that GPT modeled? And we get a little bit into how these models are trained.

If I can kind of go off the subject a little bit here, I've told Alisha before, when you're going out, I encourage you to try different GPT models because they have personalities. They develop personalities. And so, they may be better in math in one of them or programming in one of them or other aspects. And your capability to interact with them is really where they become powerful. And that's where prompt engineering comes into place.

But with my class, we start at the beginning. There's an Excel class that I teach occasionally in the MBA program, and then I also teach the Capstone 667 class, where I teach Python. We go from the origins of machine learning, through supervised and unsupervised learning, and we end up in the artificial intelligence realm. Understanding how these models make decisions, and the nuances, really is critical. It's an elegant method for getting to a solution, and it's documented.
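
As a small taste of that progression, here is a sketch contrasting supervised and unsupervised learning with scikit-learn; the four data points are invented for illustration.

```python
# Supervised learning (labels provided) vs. unsupervised learning
# (the algorithm finds structure on its own). Toy data only.
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

X = [[0, 0], [0, 1], [5, 5], [5, 6]]   # four points in two obvious groups
y = [0, 0, 1, 1]                       # labels for the supervised case

clf = LogisticRegression().fit(X, y)   # learns from the labeled examples
print(clf.predict([[4, 5]]))           # -> [1]

km = KMeans(n_clusters=2, n_init=10).fit(X)  # discovers the two groups itself
print(km.labels_)                            # e.g. [0 0 1 1] or [1 1 0 0]
```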

Jim Ray (18:49)

That makes sense. From a practical standpoint, you said earlier, we've got this computer or this assistant right here in your phone, in your pocket. What I worry about are the students who come out or the younger people who come out and don't know how to actually do the work. They don't know if it's biased. Well, if I don't have my phone or something happens to it, I can't get to the answer. And I think that's an inherent risk of over-reliance.

Fred Duff (19:17)

I tell this to my students. I tell them up front: I encourage you to use the tool. However, don't shortchange the reason why you're here. It's a great thing to be able to go reference it, just like having access to the library. But if you become so reliant upon it, then when it's turned off, you don't understand the material.

Furthermore, I will tell you that you cannot really operate this stuff at an effective level if you don't understand the underlying material. You become more capable by knowing the other material, whatever it is you bring, law, accounting, math, whatever; that is what allows you to interact with it and be far more powerful.

Jim Ray (20:11)

To build off of that, we were talking about interacting and becoming more powerful. You made a comment earlier about each of these GPTs having their own distinctive personalities; they're better for certain applications. I would think it would be a good idea not to home in on just one GPT. That's kind of why I brought up the idea of Perplexity; some people may never have heard of it. But there's something different that Perplexity does that others may not. So, you may find out, hey, I've got this great toolbox of multiple GPTs, and this particular tool happens to work well in this situation. Then maybe learn all of them. I hadn't even thought about that until we started prepping for this episode, actually.

Alisha Harper (20:49)

I think that's a great point. One of the things that I do in the classroom is I have them utilize different GPTs. Bellarmine is a Microsoft school. And so, one of the things that you always have to think about when you are using one of these generative models is this idea of you're putting information out there into the world.

In terms of bias, in terms of misuse of information, in terms of intellectual property, all of that, you have to ask this question: if I am putting this into a GPT model, is it protected? And the answer is probably no, because you are giving it to the world. It is learning based on what you are doing.

Jim Ray (21:36)

And that gets us into this whole topic of the ethics, right? The ethics of not only AI, but what you're doing with AI. Let's talk about some of the broader ethical concerns in AI development and deployment.

Fred Duff (21:53)

Just last week I gave my students the capability to operate with these models without allowing them to train on the data. If you're going directly to their websites, then you're opening yourself up: whatever you put in there could be shared. But you can operate some of these models outside of that. In fact, as these things develop, the models are getting smaller. They're getting better because they emulate the human brain's process, and we're getting better at synthesizing and understanding how that works, most recently with R1, or as some people may have heard it called, DeepSeek.

Jim Ray (22:33)

Which is the Chinese one that recently hit the news?

Fred Duff (22:36)

It's also got American people working on it, through MIT and such. I believe there's going to be a lawsuit based upon how it was trained. But what they did is use pathways instead of going at it on a brute-force level. They trained at maybe 10 percent of the cost, probably even less, with a lot less GPU capability, because they were able to say: we're going to have these areas of expertise that we call upon to go out, search whatever your question is, and bring it back. GPT only uses eight; they use a lot more. And what they found is that by networking it like that, they were able to achieve a better response rate and an accuracy rate that is pretty close to GPT's.
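
To sketch the routing idea in code, this is a toy mixture-of-experts gate, a conceptual illustration and not DeepSeek's or OpenAI's actual architecture: a score picks a few "areas of expertise" for each input and blends their outputs.

```python
import numpy as np

# Toy mixture-of-experts routing: a gate scores each "expert," only the
# top-k are consulted, and their outputs are blended by the gate's weights.
rng = np.random.default_rng(1)
experts = [rng.normal(size=(8, 8)) for _ in range(8)]  # 8 experts, each a matrix

def moe(x, gate, top_k=2):
    scores = gate @ x                              # relevance of each expert
    top = np.argsort(scores)[-top_k:]              # route to the top-k experts only
    probs = np.exp(scores[top]) / np.exp(scores[top]).sum()
    return sum(p * (experts[i] @ x) for p, i in zip(probs, top))

x = rng.normal(size=8)
gate = rng.normal(size=(8, 8))                     # maps input to 8 expert scores
print(moe(x, gate).shape)                          # (8,)
```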

Jim Ray (23:39)

Let me ask you just a couple of really quick definitions before we get too far into this. What does GPT actually mean?

Fred Duff (23:45)

All right: generative pre-trained transformer is what it is.

Jim Ray (23:51)

And that's a generic term. A GPT is a GPT, right?

Fred Duff (23:54)

Yeah, it is. So, you know, we talked about 2017, when the University of Toronto with Google developed that model for language. I think it's really important for people to recognize that these are language models: models where we tokenize the words and the phrases and try to understand mathematically how they occur in what's called the corpus of knowledge.
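
Tokenization is easy to see firsthand. A quick sketch, assuming the tiktoken package (a tokenizer used with OpenAI's models); any tokenizer illustrates the same point.

```python
# Words and word pieces become integer token IDs, the units a language
# model actually reasons over. Assumes `pip install tiktoken`.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("Generative AI in business and education")
print(ids)              # a short list of integers, one per token
print(enc.decode(ids))  # round-trips back to the original text
```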

Jim Ray (24:20)

And this is what we're hearing is LLMs, large language models. Is that what that is?

Fred Duff (24:25)

Exactly right. We found that large language models happen to be good at other stuff, which really shouldn't be surprising if you think about it, because language is extremely complicated. It's mathematic, it involves logic. It really is a development of the human species. It's one of the things that really separates us.

And so they found, and we are further ahead in large language models than in the image and visual models and others, but they found, without really training them for it, that these large language models could arguably get very close to some of the best-trained models, what we call convolutional neural networks, or CNNs. Now, that's where you've seen DALL-E and some other things. They've experimented with using these large language models and found that they're actually pretty robust in those areas.

Jim Ray (25:15)

DALL-E is where they're actually generating images?

Fred Duff (25:18)

Yeah, it's been hyper-tuned to be able to generate images. Now understand, the generation of these things is just a mathematical consequence of looking out over the broad spectrum of data and making a mathematical, probabilistic decision. We talked earlier about hallucination; it's very similar. These large language models are not capable of long-term memory, and they do lose it. They start hallucinating after 20,000 to 30,000 strings, or words. Or not words, but characters.

Jim Ray (25:58)

That's when it just starts being made up more or less.

Fred Duff (26:01)

It is. What it does is, instead of going back to your conversation with it, it goes back to its data and tries to pull in something to inference from, to drive value.

AI Governance and Ethics: The Wild West

Alisha Harper (26:12)

Well, there are two things that I would say to that. One is, you talked about the lawsuits: AI governance and ethics. That's just a huge area right now. If you're looking for an area to research, that is it. I've got a colleague, and we are actually going to be presenting at a conference on AI governance and ethics, because in the US, we don't have a huge governance structure right now. It's more of a self-regulated area.

The EU has done a very good job; they're way ahead of us in terms of governance. And I would assume we're probably going to adopt and move forward with some of what they have already done. But you hear that term, the wild, wild west. It really is the wild, wild west. So you're going to bring a lawsuit, but nobody really knows what that's going to look like. One of the issues that we're having from a legal perspective is intellectual property. Is anything that you create using that model your intellectual property? There are some people who say no, because it's learning and pulling from information that's already out there; you're not creating. There are other people who would say yes, because it's your prompt that is creating this new idea and this new image.

Jim Ray (27:24)

But it's drawing from somewhere else.

Alisha Harper (27:26)

It is, and that's the other piece of it that I wanted to bring up, that Fred mentioned: the idea that it's pulling from somewhere, and some of it could be pulled from the internet, and we all know everything on the internet is true. So that's one of the things that we as educators need to make sure we are teaching our students.

Jim Ray (27:43)

I believe Abraham Lincoln said that about the internet.

Alisha Harper (27:52)

I would like to think my students don't go, well, everything that I read on the internet is true. But we need to make sure that we are explaining this to them, that this is a tool. It is not the answer. I will say that as many times as I need to until people hear me. It's a tool. It's not the solution.

Jim Ray (28:13)

That makes sense. Your ability to manipulate that tool is the art in it, but it's still based on science. It's got to find the data somewhere. That's why, from a marketing standpoint, I've always had an issue. If I'm writing a blog for a law firm, for instance, if I'm putting together something like that, and I say, okay, ChatGPT, I want you to go ahead and draft me something like this, and here are a few parameters that I need you to hit.

I don't know where it got that. In my mind, I'm thinking I'm opening myself up, because it may actually have been lifted from another law firm, because that's how it found it. And if I don't know that, and my client, the law firm, doesn't know that, and suddenly we're putting our names on it? Coming back to the intellectual property standpoint, I think that's a problem.

Are there Risks to Using DeepSeek?

Fred Duff (28:58)

There's a real reason why they don't offer the capability to reference it. Now oddly, DeepSeek does, and that's one of its strengths; in many ways they really outmoded us. But first, I think it begins with Llama, which is Meta's. They came out with a really important large language model a couple of months ago. It does go through its sequence of thinking, so you can understand how an answer came about. The reason ChatGPT and Google's models do not is that they don't want to be reverse-engineered, so that other people can't take advantage of the large amount of money they've spent.

Jim Ray (29:40)

That makes sense. When DeepSeek came out, they were saying, hey, we did this for $6 million. Nvidia plummeted that day, along with everything else that happened in the market. It's because that radically could have shaken up, or at least seemed to shake up, all the investments we're making, not only in data centers, but in all the other follow-on.

Fred Duff (29:58)

Yeah, and if anybody's listening to this, I would not recommend that you go onto DeepSeek's website and start using it. If you want to use it, let me know; there are ways to use it safely. I think this is all good for knowledge in general, for us to be able to develop these models. I mean, it was going this way already; mixtures of experts, or MoEs, exist in GPT-4.

You know, they help us reason better. What we're going to see a year from now is more models coming out that use these techniques more effectively. There are some weaknesses with R1, but what they were able to accomplish is pretty impressive.

Jim Ray (30:41)

If R1 is kind of taking the silos that have already been established by, let's say, traditional, I know we're too early in the game to say traditional, but the traditional GPTs that we maybe knew about, are we getting one or two degrees removed from the original source? So, do we start to get fuzzy when we're using something like a GPT?

Fred Duff (31:00)

That's a great point, and I think about that. Using other models to train your model is what is at stake here, okay? When you're talking about the pre-training, they're training on the entire internet; that's what we call the corpus of the internet.

Now, on top of that, Google and Microsoft and those guys put in hundreds of thousands of hours of human interaction with that data to tune the models and make them operate correctly. That's not cheap. DeepSeek did not do that. What they did is leverage the work that was done by another model, so they saved themselves a tremendous amount of money in that pre-training. Now, is that bad? We're talking about copyright and trademark and whatnot; that's going to be an interesting court battle. But I think it's just general development, progress on where we go from here and what we're going to do. I actually think we could be looking at a future where you're not just using one of these GPTs, but simultaneously working with four or five different models to get a better response.

Intellectual Property Challenges in AI

Jim Ray (32:11)

Well, with that being said, let's jump back out to the macro, the global level. How is all of this influencing the global collaboration? Obviously, you've got IP issues, intellectual property issues out there, but you've got trade secrets. You've got all this other stuff out there that if I'm uploading some of that into a model, we can call it a GPT or whatever we want to call it, I'm uploading it into some kind of AI tool. Am I at risk of exposing that and thus basically severing the protections I would have had?

Alisha Harper (32:39)

Yeah, absolutely. One of the things that a lot of companies are doing is actually creating technology use policies for their employees: what can you utilize it for, and what shouldn't you utilize it for? You also have companies that are building their own internal GPTs so that they can use them and feel like they're protected.

At Bellarmine, because we're a Microsoft institution, we use Copilot. Copilot is part of our license and grants us much greater protection in terms of putting data in. At this point, I think a lot of people are still very hesitant, even with their own model, to put in their intellectual property, because if it does get out there, there is that question of: did you fail to protect it? Which is something that you have to do in order to maintain protection in the U.S.

Jim Ray (33:40)

And that makes sense to me from a legal standpoint. It's basically an insurance policy. The way I hear you say it is that Microsoft or whomever comes back in and says, okay, you know what, our tool failed, maybe there's a lawsuit or whatever, and they pay. But your intellectual property, which is really your growth platform, is now exposed. So, thanks for the check, but the damage is done. And I don't know if that really weighs out, because now you've got to look at going forward with your trade secrets or whatever. Little things like this, just as an amateur looking at it, make me a little hesitant to really jump in wholeheartedly. Fred, when you and I first met, we talked about the idea that there is a way to sandbox the data.

Fred Duff (34:22)

Yeah, I was going to say that. What I see companies doing is realizing you don't have to reinvent the wheel, and you can have those protections. In some cases they allow you to: Llama, which was monumental, Llama 3.2. Now you can download Llama onto your cell phone and operate a large language model from your cell phone, and it actually operates at a pretty good level. DeepSeek, similarly; if you go to their website, you can do the same thing.

In my class, I teach them how they can call upon these models while operating without sharing. There's a tool called Ollama, which allows you to operate these on-prem, meaning that you download the models, so you're not part of the sharing and the training. You're actually training your own GPT at that point, and it'll continue to operate within your paradigm.

Where you don't know what's happening is when you're going onto their servers and chatting with the GPT model there. It's interacting; it's that bridge they're offering. So, I would definitely be very careful: if you are an institution or someone who has vital data, do not put it out on the open servers. It's extremely important.
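
As one concrete version of the on-prem approach described here, a minimal sketch assuming Ollama is installed and a model has been pulled locally (for instance with `ollama pull llama3.2`); the model name and prompt are illustrative.

```python
# Calling a locally hosted model through Ollama's REST API. Because the
# model runs on your own machine, nothing here is shared for training.
import json
import urllib.request

payload = {
    "model": "llama3.2",  # any locally pulled model
    "prompt": "Summarize the risks of pasting trade secrets into public AI tools.",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```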

Jim Ray (35:53)

You know, as a small business owner, I kind of think of that like QuickBooks. I've got the CD version, which I can install on my hard drive. But I've got the online version that they are now forcing me to use. And from that standpoint, I feel like I'm a little bit more exposed.

Fred Duff (36:07)

You know, it's not that difficult. It's actually pretty easy to be able to download these models and operate them.

Jim Ray (36:19)

I think operating through the online tool sounds extremely suspect. And again, I'm an amateur, but there's the risk of: I push that button, and how do I undo that? The genie's out of the bottle.

Alisha Harper (36:32)

I think this is the important part. It's not just us as institutions of higher education; for businesses out there that are listening, educate your people, educate your employees. You need to put these policies in place. As an employee, I've seen different policies put out there, I've seen different things put out there, and it's not clear, and people don't know.

Well, can I use it or can I not use it? Or what can I use it for? What can I use it for in my work? What can I use it for in my daily job? Can I use it to do emails? Can I use it to analyze data? I think employers and companies, businesses, big or small, can protect themselves by having these policies in place. And I know it's one more policy, and we don't all love policies, but this is one that from a tech perspective could really be imperative.

Zero Trust and Data Protection in AI

Jim Ray (37:30)

Yeah, and I think we're moving that direction quickly. More and more people are using tech as part of their daily work tasks. They've got to incorporate tech somehow. But again, you're opening all these windows and all these trap doors and all these things, and you just may not know. Now, it's your business to know. And I think leadership, the C-suite, the HR people, whomever is in charge of this, chief technical officer, CIOs, whomever, need to communicate that throughout the organization, so that we understand the inherent risks, but then enjoy the freedoms as long as we're within those boundaries.

I think right now so many people just simply don't know. Fred, it's one of the reasons I don't go on TikTok. We've had this conversation. I'm worried about where the data goes. And even if I were to use the on-prem version, as you said, of DeepSeek, is it really? I just don't know. So, for me, I'm sorry, guys, I'm going back to one of the others, and hopefully I've protected myself a little.

Fred Duff (38:27)

The concept you are talking about is called zero trust. Especially since the Snowden event, you have to assume zero trust, and through that you evaluate everything. I think that's a good way to look at it.

Jim Ray (38:45)

Yeah, I think I'm a very healthy skeptic on a lot of this stuff, just until it's tried and true. And even if everybody else is doing it, mom said don't jump off the bridge just because your friends do, right? I'm kind of in that area. Alisha, let me bring you back for this.

The vision for AI's involvement in the classroom, where do you see that going? There's got to be a role in business education, and some of that is kind of what you're doing already. Let's use the model, and then let's do our own research. Where do you see that going forward?

AI in Education: Preparing for the Future

Alisha Harper (39:14)

This is something that I know everybody has heard and I've said it I don't know how many times.

AI is not going to replace people. People that know how to use AI are going to replace those that don't. And that's one of the things that I think from a business education perspective, I can bring AI into the classroom and I can teach people how to use it. I don't need to know how to create it. I don't need to know that it's a GPT and how do you code it and how do you create it and how do you teach it? 

I'm interested in those things, and I'm pursuing them by doing the Master's in Analytics at Bellarmine myself, because I want to learn. But I don't need to know that to teach my students how to engage in prompt engineering and how to use this tool for their business education. Just like I don't need to know how to create QuickBooks; I need to know how to teach my students to utilize it. I don't need to know how to code Excel, but I can teach my students to use it. The same thing is going to happen with AI.

The other thing that I would say is, we hear a lot of talk about the Industrial Revolution. There's a lot of research out there about what they call the Fourth Industrial Revolution, 4.0, and its technological advancements. And there are a lot of people who think that we are moving into, and have moved into, the Fifth Industrial Revolution, 5.0. There's actually a really great book on this, which I recommend if you've not read it, called Human + Machine. And that's what this is: it's how we're working together.

The Role of Prompt Engineering in AI

Jim Ray (40:53)

Some days I think I'm still back on 2.0. I guess I'm going to continue here. You know, it's funny when you talked about prompt engineering, Fred; I think you mentioned that term as well. For me, that seems to be just a way to refine the search. I mean, I could go into ChatGPT and say, "Tell me how to build a house," and the prompt engineering would add: with three bedrooms, with two and a half bathrooms, with a basement, I want X number of cars to fit in the garage, I want a pool, I want a wraparound porch, whatever that is. As I understand it, prompt engineering is just refining that general concept so the output is right.

Fred Duff (41:26)

So, prompt engineering is very powerful. One of the things we do know this technology does not do, and that will not be replaceable, is our capacity to innovate and create.

What it allows us to do is enhance our ability to innovate and create. And so the power of a liberal arts education cannot be overstated here; I concur 110 percent with what she said. But when you're interacting with these GPTs, your ability to break a request down and tell the model at the most binary level what you want will, more likely than not, get you a better response. And don't automatically assume that what it came back with the first time is what you want. You need to manage it. You need to play around with it. I encourage anybody out there to do that.
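
A minimal sketch of that kind of refinement, using the OpenAI Python SDK and echoing Jim's house example; the model name is a placeholder, and an API key is assumed to be configured in the environment.

```python
# Vague prompt vs. engineered prompt. The refined version assigns a persona,
# spells out constraints "at the most binary level," and names the audience.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

vague = "Tell me how to build a house."
refined = (
    "You are a residential architect. Outline a build plan for a house with "
    "three bedrooms, two and a half bathrooms, a basement, a two-car garage, "
    "a pool, and a wraparound porch. Write for a first-time home builder."
)

for prompt in (vague, refined):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content[:300], "\n---")
```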

Alisha Harper (42:15)

And you can give it examples. For example: "I want this," or "Pretend you're a professor teaching accounting; how do you talk to students about income tax?" So when you're prompting it, think about not just what you're asking it, but what you want it to say and who your audience is. I have the paid GPT version. I really like it.

But what I've found now is that it's got Python. And so, it can actually utilize code to do presentations for me, and it'll create the PowerPoints. Obviously, I have to go in and fix them. I've heard somewhere before that these models will get you about 80 percent of the way, and then the humans need to come in and do the other 20 percent, which I think is where you get that human-and-machine, that 5.0.
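
Code-built slides can look something like this minimal sketch, assuming the python-pptx package is installed; the slide text is invented.

```python
# A model can draft deck-building code like this; the human still supplies
# the other 20 percent: review, fixes, and judgment.
from pptx import Presentation

prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[1])  # title-and-content layout
slide.shapes.title.text = "Generative AI in Business"
slide.placeholders[1].text = "A tool, not the solution: verify, verify, verify."
prs.save("ai_overview.pptx")
```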

Jim Ray (43:10)

Yeah, but from an efficiency standpoint, wow, I've actually got 80 percent of it more or less there. The editing is easier than greenfield creation, staring at a blank page going, where do I begin? Versus, I've just got to move this around and tune it up.

Alisha Harper (43:24)

Yeah, it's a great tool to ideate.

The Human Element in AI Development

Fred Duff (43:26)

So, lo and behold, there's the advantage of learning a model. And by the way, they can do a wide variety of programming. I've heard that up to 65 percent of programmers are now using GPTs to do their programming; I would suggest it's higher than that, because it just speeds up the time. But now think about this: what is really the value of what we do, the highest value of us?

I believe it's 100 % about our feelings, the way we interact with the world, and the way that we make things. Our judgment, which is uniquely human. I think it accentuates the positive.

Jim Ray (44:09)

That's a great way to look at it. Accentuating the positive through a tool that helps you get there. It does. It magnifies it, hopefully amplifies it.

Fred Duff (44:18)

Now, it can be misused. We were talking about analysis. GPTs are language models, and they have a setting called temperature, which is a technical parameter. Anybody who's worked with these knows that, because of the temperature most of them are set at, they're prone to make some errors so that they can come back with a wider variety, a better-feeling product.

There are other tools, and that's our job at Bellarmine: to teach you those tools so that you're doing the right type of analysis, solving the right problem with the right tool, and using it to optimize your way to a solution.
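
Temperature is literally a request parameter. A minimal sketch with the OpenAI SDK, the model name again a placeholder: near-zero favors deterministic answers for analysis, while higher values trade consistency for variety.

```python
# Same question at two temperatures: near-0 favors the single most likely
# answer; higher values sample more freely and vary from run to run.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

for temp in (0.0, 1.2):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": "Name one use of AI in accounting."}],
        temperature=temp,
    )
    print(f"temperature={temp}: {reply.choices[0].message.content}")
```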

Closing Thoughts

Jim Ray (44:58)

That's right. I can drive a nail with the back of a screwdriver, but a hammer is much more efficient. Well, guys, let's wrap up the episode today. We've covered an awful lot of ground, and we've talked about a bunch of different issues related to AI. But any closing thoughts or takeaways?

Alisha Harper (45:15)

I think my biggest closing thought is, I don't want to overemphasize the beauty of AI as a tool, nor do I want to under-emphasize the need for technical skill. At the end of the day, if my GPT is creating code, I need to know that that code is correct, right? So, I've got to understand how to create code, and I've got to understand what that code is supposed to look like, in order to know whether the GPT is creating the correct code. That technical ability is still of significant value. It's just that I have a tool to expedite it.

Jim Ray (45:56)

That makes sense, to get to the answer quicker, but again, you still have to be able to prove it, right?

Fred Duff (46:01)

Yeah, my leading thought would simply be that, look, we are embarking upon a really exciting time, and artificial intelligence is certainly part of that, along with quantum computing and other things. While there's a lot of fear out there, I think we also need to understand that this could be humanity's best chance to get to places, to solve problems we have on a health level and on an economic level, that we conceivably had no capacity to solve previously. So, my advice is: come get in our class. Let us help you.

Jim Ray (46:44)

There's always a call to action. Yeah, I may need you for marketing.

Well, friends, first of all, let me begin by thanking Fred Duff and Alisha Harper. Let me thank you both for your time. I know there was a lot of prep that went into this discussion. You guys did a great job of bringing different perspectives. Again, this is so new to a lot of people, but you guys are already in the game. You're already utilizing it, and you're sharing that perspective with us.

I really appreciate the time you took just to prep for this, not only for this particular episode, but by doing what you're doing and how you're already utilizing the tools.

And then, especially for the audience, this has been, as I said earlier, a very broad topic. There's much more that we could do with this topic. I think you'll see this theme come back later on this year through some additional episodes.

Nonetheless, you took some time out of your day today. And I really wanted to thank you for sharing your time. If there's one thing I could ask, it would be that you take this episode, wherever you found it, and share it out on your personal social media. Let other people find out about the Bellarmine on Business Podcast.

Let other people find out about the information we're sharing, but most importantly, let them find out about the Rubel School of Business and the kind of topics and the kind of processes and the kind of concepts that we're thinking through here and discussing. We'd really appreciate that. You could help us out tremendously.

From all of us here at the Bellarmine on Business podcast from the Rubel School of Business, Swords Up and Let's Go Knights.

Thank you for listening to this episode of the Bellarmine on Business podcast.  Please remember to SUBSCRIBE to our podcast, so you don’t miss an upcoming episode.

Disclaimer:

The views and opinions expressed during the Bellarmine on Business podcast do not necessarily reflect those of Bellarmine University, its administration or the faculty at large.  The episodes are designed to be insightful, thought-provoking and entertaining.

Want to Listen to Additional Episodes?

You can find additional episodes on the Rubel School of Business Podcast page of the Bellarmine website, various Bellarmine social media pages, Apple Podcasts, Google Podcasts, Spotify, Amazon Music, Audible, Libsyn, Podchaser and many other podcast directories.  We encourage you to subscribe to our podcast so you don’t miss an episode.

Interested in Developing a Podcast for Your Business or Organization?

This podcast was produced by Jim Ray Consulting Services. Jim Ray, host of the Bellarmine on Business podcast, can help you with the concept development, implementation, production and distribution of your own podcast. For more information, visit: https://jimrayconsultingservices.com/podcastproduction

