
Kevin Scott talked about Bing’s quest to beat Google and the future of AI art

Breaking AI Watermarks: Soheil Feizi, a University of Maryland Computer Science Professor, on the State of the Field

Soheil Feizi considers himself an optimistic person. But the University of Maryland computer science professor is blunt when he sums up the current state of watermarking AI images. He says that there is no reliable watermarking at this point. “We broke all of them.”

So, it was really amazing the extent to which having an AI creative partner helped unblock me. But it was still all me trying to figure out how the plot of this book ought to work. And I don’t think it would be particularly interesting, to me as a reader, to consume a novel’s worth of content that was 100 percent generated by an AI, with no human touch whatsoever. I don’t even know what that would be.

Microsoft CTO Kevin Scott on Bing’s quest to beat Google and the future of AI art: what do we need to know now, and what can we do about it?

We can go everywhere with Kevin, from the nature of art to graphics cards. I just want to make sure we hit it all.

Why do people make art? The AI moment has given us the chance to ask that question seriously. The internet has mostly been used to make money, and as our distribution channels get flooded with generated content, there is a divergence there. Whether we will hit the answer in the next ten minutes, I don’t know.

So, the last time you and I spoke, you said something to me that I have been thinking about ever since: every dollar of Microsoft’s GPU budget runs through you, right here. Is that job any easier now?

It is easier now than it was the last time we spoke. When we were in that moment, a bunch of AI technology had ripped onto the scene in a surprising way, and demand was far exceeding the supply of GPU capacity that the whole ecosystem could produce. That is resolving. It is still tight, but it is getting better each week, and there is more good news ahead of us than bad, which is great. It makes my job of adjudicating these very gnarly conflicts less terrible.

There was some reporting this week that Microsoft is heavily investing in small models that don’t demand a lot of compute. Are the costs of compute going to come down over time?

I think we are. We discussed backstage that when you build one of these applications, you end up using a full portfolio of models. You want access to the big models for a lot of reasons, but if you can offload some of the work that the AI application needs to do to smaller models, you probably are going to want to do it.

Some of the motivation could be cost. Some of it could be latency. Some of it could be that you want to run part of the application locally because you don’t want to send sensitive information to the cloud. There are so many reasons why you want to be able to architect things around a portfolio of these models.
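To make the portfolio idea concrete, here is a minimal sketch of routing between a small local model and a large hosted model. The model calls, routing rules, and thresholds are illustrative assumptions for this article, not a description of how Microsoft actually architects its applications.

```python
# A minimal sketch of the "portfolio of models" idea: route cheap or sensitive
# requests to a small local model, and escalate to a large hosted model only
# when the task demands it. Names and rules here are hypothetical.

from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    contains_sensitive_data: bool = False
    needs_long_reasoning: bool = False

def call_local_small_model(prompt: str) -> str:
    # Placeholder for an on-device model (e.g. a quantized small LLM).
    return f"[small model] {prompt[:40]}..."

def call_hosted_large_model(prompt: str) -> str:
    # Placeholder for a frontier model behind a cloud API.
    return f"[large model] {prompt[:40]}..."

def route(req: Request) -> str:
    """Prefer the cheap local model; escalate only when the task demands it."""
    if req.contains_sensitive_data:
        return call_local_small_model(req.prompt)   # keep data off the network
    if req.needs_long_reasoning or len(req.prompt) > 2000:
        return call_hosted_large_model(req.prompt)  # pay for capability when needed
    return call_local_small_model(req.prompt)

print(route(Request("Summarize this meeting note", contains_sensitive_data=True)))
```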

Source: Microsoft CTO Kevin Scott on Bing’s quest to beat Google and the future of AI art

Pricing AI: OpenAI, Copilot, and GPT-3.5

Well, let me deploy my finest press training and say that if you are an API customer right now, whether you’re using the Azure OpenAI API or OpenAI’s own instance of the API, you don’t have to think about what the underlying hardware looks like. It is an application programming interface; it’s a simple way to build an application on top of the API.

I am looking at Copilot in Office 365. The price is $30 a seat. That is an enormous price. I think some people are going to find it very valuable, but that is not a mass-market price for AI. Can you bring it down?

I think we can bring the underlying cost of the AI down substantially. OpenAI reduced the cost of access to GPT-3.5 by a factor of 10 this spring, and that was almost all about performance optimizations. The chips are getting better, price-performance-wise, generation over generation. The software techniques we’re using to optimize the models are bringing costs down substantially without compromising quality. And then you have these other techniques, like how you compose your application out of small and big models, that help as well. So yeah, definitely, the cost goes down. And the price is just about what value you’re creating for people; the market sets the price. If the price is too high, the market will tell us.

Yeah, we’re getting really good signal about price right now. I think the statement you made is important: it is very early days for the commercialization of generative AI, so you have a whole bunch of things that you’ve got to figure out in parallel. One of them is how you price it, and what the market for it is. There is no reason to overprice things; you want everybody to get value from them. So we’ll figure that out, I think, over time.

The story of running this compute for customers right now is access to Nvidia H100s; that is where the capacity is being built. Nvidia has roughly 80 percent of the overall market share. How much do they represent for you?


Nvidia, AMD, and PyTorch: Lisa Su, custom silicon, and what Microsoft will do next

They are one of our most important partners. We work with them on a daily basis on a whole bunch of stuff, and I think the relationship is very good.

I look at Amazon and Google, and they’re making their own chips. I spoke to the CEO of one of those companies a few weeks ago, and he didn’t sound excited about being dependent on a single chip maker; they want to move to their own silicon. Are you thinking about custom chips? Are you thinking about diversifying that supply chain for yourself?

Going back to the previous conversation, if you want to make sure that you’re able to price things competitively, and you want to make sure that the costs of these products that you’re building are as low as possible, competition is certainly a very good thing. I know Lisa Su, from AMD, is here at the conference. We are doing a lot of interesting work with Lisa, and I believe they are going to have more and more important offerings in the future. I think there’s been a bunch of leaks about first-party silicon that Microsoft is building. We’ve been building silicon for a really long time now. So—

I’m not confirming anything, but we have been investing in our own silicon for years. What we will do is make sure that we’re making the best choices for how we build these systems, using whatever options we have available. And the best option that has been available over the past handful of years has been Nvidia.

Is that due to the processing power of the chips, or is it the CUDA platform? Lisa told me yesterday that the industry needs to move one level up the stack, to PyTorch, and that Nvidia’s perceived moat is CUDA but it is not really the moat. Do you agree with that? Are you dependent on the chips, are you dependent on their software infrastructure, or are you working at a level above that?

Well, I think the industry at large benefits a lot from CUDA, which Nvidia has been investing in for a while. If your business is, “I have a whole bunch of different models, and I need to performance-tune all of them,” the PyTorch-CUDA combo is pretty essential. We don’t have a ton of models that we’re optimizing.

There are other tools that can be used, like the open-source one that OpenAI developed, to write high-performance kernels for your own hardware. Even if you only use one vendor, you want it to be easy to fully utilize your hardware resources in production.
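The open-source OpenAI tool referenced here is presumably Triton, which lets you write high-performance GPU kernels in Python without touching CUDA directly. Below is the standard tutorial-style vector-add kernel as a sketch of what that looks like; it assumes a CUDA-capable GPU plus the triton and torch packages, and it is not one of Microsoft’s production kernels.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against the ragged final block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

# Launch: one program per BLOCK_SIZE-sized chunk of the inputs.
n = 1024
x = torch.rand(n, device="cuda")
y = torch.rand(n, device="cuda")
out = torch.empty_like(x)
grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
add_kernel[grid](x, y, out, n, BLOCK_SIZE=256)
```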

So I asked Lisa yesterday, “How easy would it be for Microsoft to just switch from Nvidia to AMD?” And she told me, “You should ask Kevin that question.” You are here. How easy would it be if you had to switch to AMD? Are you working with them on anything? What will that look like in the future?

It is not trivial to swap this hardware around; these are all big investments. But the only part of the software stack the customer sees is the API interface. If that is the way you are building your application, on top of those APIs, you don’t need to care. And there are a bunch of people who are not building on top of these APIs, who do have to care, and they can make their own choices about how difficult they think the switch is.

The other theme that a bunch of folks at the conference yesterday asked me to raise with you is open source. You obviously have a huge investment in your own models, OpenAI has GPT, and there’s a lot of action around that. On the flip side, there’s a bunch of open-source models that are really exciting, and you were talking about running models locally on people’s laptops. Are there real moats around these big models right now? Or is open source actually going to come and disrupt that over time?

Yeah, I don’t know whether it’s even important to think about the models as moats. There are some things that we’ve done, and a path forward for the most powerful of these models as platforms, that are just super capital intensive. I don’t think a software breakthrough alone will make it much less capital intensive, because it’s not just about what you can put on your desktop; it’s about hardware as well as software. It’s hard to get that scale by fragmenting a bunch of independent software efforts.


Kevin Scott on Bing, Google, and the future of AI art: watermarking content, and why he would delete AI-generated email

I have more questions. If you have questions for Kevin, please start lining up. I’d love to hear from all of you. I want to make sure we talk about authenticity and metadata, marking things as real, something you and I have talked about a lot in the past. There are a lot of ideas about how to mark content. We are going to see some from Adobe later today. Did you make any progress here?

I believe we have. For the past few years, we have been creating a set of watermarking technologies and trying to work with both content producers and tool makers to see how we can get them adopted.

Text is definitely harder. There are some research-y things that folks are working on where, during the generation of the text, you subtly add a statistical fingerprint to how the text is generated. It is much more difficult to hide a watermark in text than in an image or a video, because an image or a video has far more room to hide the mark without changing the experience for the people viewing it. So it’s a tougher problem, for sure.
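As a rough illustration of the “statistical fingerprint” approach Scott describes, here is a sketch of a green-list style text watermark, loosely following published research such as Kirchenbauer et al. The vocabulary size, bias value, and hashing scheme are illustrative assumptions, not anyone’s production watermark, and, as Feizi’s results above suggest, schemes like this can be attacked.

```python
import hashlib
import random

# Sketch of a green-list text watermark: bias generation toward a pseudo-random
# subset of the vocabulary, then detect over-use of that subset. Values below
# are illustrative assumptions only.

VOCAB_SIZE = 50_000
GREEN_FRACTION = 0.5
BIAS = 2.0  # logit boost applied to "green" tokens during generation

def green_list(prev_token: int) -> set[int]:
    """Pseudo-randomly partition the vocabulary based on the previous token."""
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(range(VOCAB_SIZE), int(VOCAB_SIZE * GREEN_FRACTION)))

def watermark_logits(logits: list[float], prev_token: int) -> list[float]:
    """Boost green tokens' logits so generated text slightly over-uses them."""
    greens = green_list(prev_token)
    return [l + BIAS if i in greens else l for i, l in enumerate(logits)]

def detect(tokens: list[int]) -> float:
    """Fraction of tokens drawn from their green list; values well above
    GREEN_FRACTION suggest the text carries the watermark."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    return hits / max(len(tokens) - 1, 1)
```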

An email that is labeled as having been generated by artificial intelligence is the label I want the most. When I think about my inbox, that’s what would fix it.

I know what my preferences are for those emails: I’m going to tell Cortana to delete ‘em right away. Fair warning to all of you. If you write to me with AI, it’s gone.


Kevin Scott on Bing’s quest to beat Google and the future of AI art: audience Q&A with Pam Dillon of Preferabli

Pam Dillon: Good morning, Kevin. Pam Dillon, from Preferabli. This question is not AI-generated. We have been talking about assimilating the world’s knowledge in a general sense. Do you think about how we’re going to start to integrate specialized bodies of knowledge, areas where there’s real domain expertise, whether that’s medicine, health, or consumer sensory domains?

Kevin Scott: Yeah, we are thinking a lot about that. In the process of training a model, it can be helpful to add expert input to the data in order to improve the quality of that model. We have been thinking a lot about medical applications.

Peter Lee is one of the people I work with at Microsoft, and he wrote a great book about medicine and GPT-4. And all of that is exactly what you said. Through careful prompt engineering and selection of training data, a model can be very high performing in a particular domain. And I think we’re going to see more and more of that over time, with a whole bunch of different domains. It’s really exciting, actually.
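A tiny sketch of the “careful prompt engineering” Scott mentions: steering a general model toward a specialist domain with a system prompt and a worked example. The prompt text and message format here are illustrative assumptions using the common chat-message structure, not what Peter Lee’s team or Microsoft actually uses.

```python
# Hypothetical domain-specialization via prompting: a system prompt plus a
# few-shot example, assembled into standard chat messages for any chat API.

SYSTEM_PROMPT = (
    "You are a clinical documentation assistant. Answer only from the supplied "
    "chart notes, cite the note you relied on, and say 'insufficient information' "
    "when the notes do not support an answer."
)

FEW_SHOT = [
    {"role": "user", "content": "Notes: 'BP 152/95 on 2024-01-03.' Is the patient hypertensive?"},
    {"role": "assistant", "content": "The 2024-01-03 note records BP 152/95, above the 140/90 threshold, so yes."},
]

def build_messages(question: str, notes: str) -> list[dict]:
    """Assemble a domain-specialized prompt for a general chat-completion API."""
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + FEW_SHOT
        + [{"role": "user", "content": f"Notes: {notes!r}. {question}"}]
    )

print(build_messages("Is the patient diabetic?", "A1C 7.9 on 2024-02-10."))
```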


How Does the Black Keys’ Music Influence Generated Music? A Question About Provenance and the Future of Artificial Intelligence

Alex: Hi Kevin, my name is Alex. I have a question about provenance. Yesterday, the CEO of Warner Music Group, Robert Kyncl, was talking about his expectation that artists are going to get paid for work that is generated off of their original IP. Today, obviously, LLMs don’t give you provenance. My question is a technical one: let’s say somebody asks for a song that’s sort of in the style of Led Zeppelin and Bruno Mars, and the Black Keys, a band that sounds very much like Led Zeppelin, is what the LLM is actually drawing on. Would there be a way, technically, to say from a provenance standpoint that the Black Keys’ music was used in generating the output, so that the artist could be compensated in the future?

KS: Yeah, maybe. I think the particular thing you are asking about is controversial even for human writers. Humans are influenced in very subtle ways, as the Ed Sheeran case showed, and a lot of pop songs have similarities to one another.

I think you have to look at both sides of it. AI aside, how do you actually measure the contribution of one thing to another? That is very difficult. If we were able to do that part of the analysis, it would be possible to find some technical solutions. It’s relatively easy to make sure that generations are not parroting training data, either in whole or in snippets. It is much harder to understand how much any individual piece of training data contributed to a particular generation.
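For the “easier” half of that problem, checking whether a generation parrots a source in whole or in snippets, a crude n-gram overlap check is enough to illustrate the idea. This is a hypothetical heuristic, not how any production system measures attribution, and it says nothing about the harder problem of subtle influence.

```python
# Toy check for verbatim parroting: fraction of the generated text's word
# n-grams that also appear in a candidate source. The n-gram length is an
# illustrative assumption.

def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def parroting_score(generated: str, source: str, n: int = 8) -> float:
    """Near 1.0 means near-verbatim copying; near 0.0 means no long shared runs."""
    gen = ngrams(generated, n)
    if not gen:
        return 0.0
    return len(gen & ngrams(source, n)) / len(gen)
```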

Gretchen Tibbits: Hi, Gretchen Tibbits, DC Advisory. Building on the question that was just asked: there have already been some cases, and some questions, about information from publishers and creators being used to train these models, and, setting music generation aside, about those creators asking for percentages, rights, or recognition for that. I’m wondering, and I’m not asking you to comment on any active case, but philosophically, what are your thoughts on that?


Kevin Scott on Bing’s quest to beat Google and the future of AI art: a thought exercise for everyone who has read a book about a whale

Here is a thought exercise. How many of you have ever read a book? I am guessing that all of you, in high school or maybe college, have read at least one. If I asked you about a particular book about a whale, you could tell me it is about a whale. There is a leader, and maybe you remember him as Ahab, and maybe you remember he has some sort of fixation he is focused on. You could tell me a lot about the man. If you are a literature enthusiast, you might even be able to recite a portion of the book from memory, like the opening line.

KS: And that’s the thing that will get sorted out. I don’t know the answer to that question, because it relies on judges and lawmakers, and we will figure this out as a society. But what the models are attempting to do isn’t to be a huge repository of this content. You’re attempting to build something that, like your brain, can conceptually remember some things about content that was present in the training data. We will have to see.


What is the balance of trade? Why writing is not just a transaction but something that creates a connection with the world

So let me just back all the way up and say nobody wants to… As an author myself, I don’t want to see anyone disenfranchised. There have to be economic incentives for people to produce content and earn money writing books; not everybody can do the work of writing a well-researched piece of nonfiction, and someone who pours their heart and soul into writing needs to be able to make a living from it. This is a new modality for content, and there are many big questions to answer about exactly what is happening and what the appropriate way is to compensate people for it.

What is the balance of trade like? Because hopefully, what we’re doing is building things that will create all sorts of amazing new ways for creative people to do what they’re best at: creating wonderful things for other people to consume, things that create connection and enhance what makes us human.

I think the thing that you want in general, as a consumer of content, is not to read a lot of useless stuff; I don’t believe anyone wants that. I would argue, and this is an interesting thing you and I haven’t talked about, that the purpose of making a piece of content is not flimsily transactional. It is trying to put something meaningful out into the world, to communicate something that you are feeling or that you think is important to say, and then to have some kind of connection with whoever is consuming it.

KS: Yeah. With that particular thing, it was less about the AI and more about how the human piece of that was working. Honestly, that would’ve been a little bit better if there’d been more AI.

KS: No, I’m not blaming anyone. I think the diagnosis of that problem is that some of these things on MSN, and I know this is true for other places, get generated in really complicated ways. It wasn’t the case that a Columbia-trained journalist was sitting down and writing this, and the machine tool they used to do it was suddenly faulty. That’s not what was going on here.

KS: Well, I think you all are going to judge the quality of the content. If it’s directed at you, you’re the ultimate arbiters of, “Is this good or bad? Is it true, or is it false?” One thing these tools can help with is navigating a world where a bunch of tools can make low-quality content, and let me plant a seed with you all that could prove useful: having your own personal editor-in-chief that helps you assemble what you think are high-quality, truthful, reliable sources of information, and helps you walk through this ocean of information and identify those things, will, I think, be super useful. And for what you all are doing, by the way, and many of you in the room, I’m sure, are in media businesses, I think having all of this content out there makes your job more important.

KS: Way more important. Because everybody needs someone they trust, someone with high editorial standards who helps them separate signal from noise. It’s absolutely true.


Using GPT-4 to write a novel: what the tool is good for, and where AI-generated content falls short

Correct. I agree with that. But the point that I was making is that the useful thing about the tool is that it helped keep me in a flow state. I’ve written a nonfiction book before, but I had never written a novel. So the useful thing wasn’t actually producing the content; it was that when I got stuck, it helped me get unstuck, like having an ever-present writing partner or an editor with infinite amounts of time to spend with me. It’s like, “Okay, I don’t know how to name this character. Let me describe what they’re about. Give me some names that are fun.”

That’s the model today, though. The writers’ strike is resolving, and the deal talks about the capabilities of the models as they exist now. There will be future GPTs, right?

I think you are almost certainly going to want to use some of these AI tools to help produce content. When I was first playing with this stuff, I wanted to write a science fiction book, something I had never been able to do. I started attempting that with GPT-4, and it was terrible when used in the way you might expect: you cannot just say, “Here is an outline for a science fiction book. Please write chapter one,” when you don’t really know what you want to write.

So, there’s nothing about an AI being 100 percent of that interaction that seems interesting to me. I don’t know why I would want to be consuming a bunch of AI-generated content versus things that you are producing.

AI-generated content is sometimes good and sometimes not, and I think it is often not as interesting as it could be. There is a technical question of whether you are swallowing generated content back into your training process and causing the performance of the trained model to get worse over time. That is a technical thing, but I believe it is an entirely solvable problem.

We’ve got an increasingly good set of ways, at least on the model training side, to make sure that you’re not ingesting low-quality content, and you’re sort of recursively getting—
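As a sketch of what filtering on “the model training side” can look like, here is a toy quality filter over candidate training documents. The heuristics and threshold are made-up assumptions for illustration; real pipelines combine learned classifiers, deduplication, and provenance signals rather than anything this simple.

```python
import re

# Hypothetical pre-training data filter: score each candidate document with
# crude heuristics and drop anything below a threshold before training.

def quality_score(doc: str) -> float:
    """Crude heuristic score in [0, 1] for a candidate training document."""
    words = doc.split()
    if len(words) < 50:
        return 0.0
    unique_ratio = len(set(words)) / len(words)            # penalize repetitive spam
    alpha_ratio = sum(w.isalpha() for w in words) / len(words)
    boilerplate = len(re.findall(r"click here|subscribe now", doc.lower()))
    return max(0.0, min(1.0, 0.5 * unique_ratio + 0.5 * alpha_ratio - 0.1 * boilerplate))

def filter_corpus(docs: list[str], threshold: float = 0.6) -> list[str]:
    """Keep only documents above the quality threshold."""
    return [d for d in docs if quality_score(d) >= threshold]
```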

The flip side of that is you also make a lot of tools that can create AI content, and you can see these distribution platforms filling up with it. A search engine, or even the training of a new model, being flooded with its own AI spam essentially leads to things like model collapse, a drastic reduction in quality. How do you filter that stuff out?

That is an opportunity we have right now with how these agents are going to show up in the world. It isn’t necessarily about preserving exactly what that funnel looks like, but about being transparent about what the mechanics are, so that if you are going to spend a lot of money or try to use an agent to acquire an audience, you at least know what’s going on and aren’t left unable to viably run your business.

Now, I think the compensation structure and how things work will just evolve really rapidly. It feels like things are changing very fast right now: how to find an audience for a product or service, and how to turn audience engagement into a real business model. It is difficult because some of the funnels are hard to decode. You don’t really know what is going on inside a search engine’s ranking, even though it is directing traffic to your site.

I don’t think that is what anyone wants. It’s certainly not the thing that I want, individually. There needs to be a healthy economic engine where people are all participating. They’re creating stuff, and they’re getting compensated for what they create.


If an AI search product can summarize my phone review, why would I ever write another one?

If an AI search product can just summarize for you what I wrote in a review of the new phone, why would I ever be incentivized to create another review of a phone if no one’s ever going to visit me directly?

Like you’re planning a vacation, or you’re doing research on how to wire the ethernet cables in a house you’re remodeling, whatever it is. Sometimes the end of the task is purchasing something, or even reading a lengthy piece, because you can’t get everything you need in a single transaction with an agent. I don’t believe that dynamic changes all that much. I think the particular thing everybody is worried about is referrals: what happens to referral traffic when the bot answers all of your questions?

Yeah. So I think what you want from a search engine, and what you’re going to want from an agent, is a little more complicated than just asking a question and getting an answer. A whole bunch of the time, you’re trying to accomplish a task; asking questions is part of the task, but sometimes it’s just the beginning, and sometimes it’s in the middle.

I think the conventional wisdom is that in an AI-powered search experience, you ask the computer a question and it just tells you a smart answer, or it goes out and talks to other AI systems that collect an answer for you, and that’s the future. If you just broadly ask people, “What should search do?” they’ll say, “You ask a question, you get an answer.” But that has a significant effect on how the web works; search results are the fundamental incentive structure of the web. Did you think about that with Bing?

Broadly, we have to be asking ourselves how everyone can participate and what is fair. That is the goal at the end of the day. We’re creating big platforms right now, whether it’s search as a platform or these cloud platforms, and I think everybody is very reasonable in wanting to make sure that they can use these platforms in a fair way to do awesome work.

I think the only thing you can ask for is that the marketplaces are fair so you can compete. And I think that’s true for big companies, small companies, and individuals who are trying to break through; that notion of fairness is what everybody’s asking for. It’s hard to sort out. I will not comment on what’s going on on the East Coast right now.

The context of this question is that, as we sit here on the West Coast having this conversation, on the East Coast, Google is in the middle of an antitrust trial about whether it unfairly created a monopoly in search. A big theme in the trial is, “Well hey, Microsoft exists. They could compete if they wanted to. We’re just so good at this that they can’t.” Do you think AI actually gives Bing an edge in that race right now?

Yeah, for sure. It’s small market share gains, but definitely gains in ways that we hadn’t seen before. There are a lot of interesting things coming: we announced DALL-E 3 in Bing Chat last week, and we are still taking feedback and trying to improve. It has been an interesting opportunity for us to do a bunch of experimentation with that team, and a lot of what we learned on Bing has been applied to the Copilot products that are being built and to a business that is growing fast right now.

I asked Kevin why he would bother writing a song or a book if AI were making custom content for everyone else. Part of the answer is that Kevin thinks AI is still “terrible” at it for now, as he found out firsthand; he also thinks that creating with AI tools will help people become more creative. Like I said, this conversation got deep. I really like talking to Kevin.

Kevin Scott is the Microsoft CTO and one of my favorite guests from Code, which I co-hosted last week. If you caught Kevin on Decoder a few months ago, you know that he and I love talking about technology together. I really appreciate that he thinks about the relationship between technology and culture as much as we do at The Verge, and it was great to add the energy of the live Code audience to that dynamic.

Now that the initial hype has subsided, Kevin and I also discussed whether Bing is actually stealing users from Google.

Kevin also controls the whole GPU budget at Microsoft, which raised the issue of access to Nvidia’s H100 GPUs, the chips so many of the best models run on. Kevin knows Microsoft depends on H100s; he wouldn’t confirm any rumors about Microsoft developing its own chips, but he did say that a switch to another vendor shouldn’t matter much to Microsoft’s customers.