
Artists can use new tools to disrupt the systems of artificial intelligence

Nightshade: Making a Dog Look Like a Cat to an AI Model

Zhao says he hopes Nightshade will be able to pollute future AI models to such a degree that AI companies will be forced to either revert to old versions of their platforms — or stop using artists’ works to create new ones.

“So it will, for example, take an image of a dog, alter it in subtle ways, so that it still looks like a dog to you and I — except to the AI, it now looks like a cat,” Zhao says.

AI models like DALL-E or Stable Diffusion typically identify images through the words used to describe them in the metadata. For instance, a picture of a dog pairs with the word “dog,” Zhao explains.

Nightshade works by adding a kind of poison pill inside an artwork, attempting to confuse the training model about what is actually in the image.
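To make the idea concrete, here is a minimal, illustrative sketch of the general technique such tools rely on: an imperceptible, targeted perturbation that nudges pixels within a small budget so a surrogate classifier reads the image as a different concept (a cat rather than a dog), while a person sees essentially no change. This is not Nightshade's actual algorithm; the surrogate model, budget, and function names here are hypothetical.

```python
import torch
import torch.nn.functional as F

def poison_image(image, target_class, surrogate, epsilon=4 / 255, steps=10):
    """Return a subtly perturbed copy of `image` (a CxHxW tensor in [0, 1]) that a
    surrogate classifier tends to label as `target_class` (e.g. "cat" not "dog")."""
    poisoned = image.clone().detach()
    for _ in range(steps):
        poisoned.requires_grad_(True)
        logits = surrogate(poisoned.unsqueeze(0))
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        loss.backward()
        with torch.no_grad():
            # Step against the gradient to move toward the target class, then clamp
            # so the total change stays within an imperceptible budget of the original.
            poisoned = poisoned - (epsilon / steps) * poisoned.grad.sign()
            poisoned = torch.min(torch.max(poisoned, image - epsilon), image + epsilon)
            poisoned = poisoned.clamp(0.0, 1.0).detach()
    return poisoned
```

The key design point is the epsilon budget: the change to any pixel is capped so tightly that the altered artwork looks unchanged to a person, while the label the model learns from it is wrong.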

On the show we have talked a lot about whether artists and writers whose work is used to train large artificial intelligence models have any recourse when it comes to getting paid or credited, or even to suing the companies that make them.

Zhao’s team also recently launched Glaze, a tool which subtly changes the pixels in an artwork to make it hard for an AI model to mimic a specific artist’s style.

Kudurru, Spawning.ai, and the Artists Suing Over Stable Diffusion

Kudurru is a tool from a for-profit company called Spawning.ai. The resource tracks scrapers’ internet addresses and can block them or send back unwanted content, such as the Rickroll, the internet prank that redirects people to a music video by Rick Astley.

Spawning co-founder Jordan Meyer says he wants artists to be able to communicate with bots and scrapers on their own terms, rather than simply handing over everything they have made.
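As a small illustration of the blocking idea described above (this is not Spawning's actual implementation; the addresses and paths are made-up examples from the reserved documentation ranges), a server can keep a list of addresses known to belong to scrapers and either refuse them or hand back decoy content such as the Rickroll video:

```python
# Hypothetical sketch: decide what to serve based on who is asking.
KNOWN_SCRAPER_IPS = {"203.0.113.7", "198.51.100.42"}   # example/documentation IPs
RICKROLL_URL = "https://www.youtube.com/watch?v=dQw4w9WgXcQ"

def respond_to_request(client_ip: str, requested_path: str) -> tuple[int, str]:
    """Return an (HTTP status, body or redirect location) pair for a request."""
    if client_ip in KNOWN_SCRAPER_IPS:
        # Either refuse outright (403) or send the scraper somewhere unhelpful.
        return 302, RICKROLL_URL                 # redirect to the prank video
    return 200, f"serving {requested_path}"      # normal visitors get the artwork

print(respond_to_request("203.0.113.7", "/gallery/painting.jpg"))
print(respond_to_request("192.0.2.10", "/gallery/painting.jpg"))
```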

McKernan says they have been waging a war on AI since last year, when they discovered their name was being used as an AI prompt, and then that more than 50 of their paintings had been scraped into LAION-5B, a massive image dataset used to train AI models.

Totally. And it’s been sort of a cloud hanging over the entire AI industry. And this week, we actually got an update on how the legal battle is going. Some artists, including Sarah Andersen, a cartoonist, sued the company that makes Stable Diffusion, along with other companies, after they found out about the image generator from an interview we did on the show.

In the meantime, McKernan says the new digital tools help them feel like they’re doing something aggressive and immediate to safeguard their work in a world of slow-moving lawsuits and even slower-moving legislation.

And to me, the big takeaway from this — the thing that, if you know nothing else about this executive order, you should know, is that it basically signals to the AI industry from Washington, we are watching you. Right? This is not going to be another social media where you have a decade to build and spread your product all over the globe before we hold hearings and hold people accountable. We are actually going to be looking at this in the very early days of generative AI.

Will the Defenses Hold? Researchers and Hugging Face Weigh In

My house is getting broken into, and I’m going to use an ax, or something, to protect myself. That is the kind of defense the new tools are said to afford.

“These types of defenses seem to be very effective right now with regard to many things,” says Gautam Kamath, a researcher at the University of Waterloo. “There is no guarantee that they will still work in 10 years. Heck, even a week from now, we don’t know for sure.”

Heated debates over how effective these tools really are have been playing out on social media platforms. The conversations sometimes involve the creators of the tools.

“This is not about writing a fun little tool that can exist in some isolated world where some people care, some people don’t, and the consequences are small and we can move on,” says the University of Chicago’s Zhao. “This involves real people, their livelihoods, and this actually matters. We will keep going as long as it takes.”

But Yacine Jernite, who leads the machine learning and society team at the AI developer platform Hugging Face, says that even if these tools work really well, that wouldn’t be such a bad thing.

Jernite supports the idea of broadly available data for research, but says companies that use artificial intelligence should respect artists’ wishes.

“Any tool that is going to allow artists to express their consent very much fits with our approach of trying to get as many perspectives into what makes a training data set,” he says.

Jernite says several artists whose work was used to train AI models shared on the Hugging Face platform have spoken out against the practice and, in some cases, asked that the models be removed. The developers do not need to comply.

Still, many artists, including McKernan, don’t trust AI companies’ opt-out programs. “They don’t all offer them,” the artist says. “And they don’t make the process easy.”

Green Shoots and Guardrails: Are We Less Nervous About AI?

This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email transcripts@nytimes.com with any questions.

But do I want the government saying, oh, if you’re going to train the largest language model yet, we’d like you to tell us? I lean on the side of, yes, let someone know. Have someone pay attention to this. So that’s kind of where I am. Where are you?

So I don’t want the government to get so far out ahead of things that it is prevented from doing all the things that Ben Buchanan just talked about, like helping to address climate change, for example, using the power of AI. That could be a good thing if the government could do that. I don’t think we need to slam on the brakes so hard that there’s no chance of that.

That’s kind of the first part of it. The second part of it is the sort of mythical-future GPT 5 and all the equivalents where all the other companies — we just don’t know how good they’re going to be. Like, what we know is that there have been massive leaps in each successive version of these models.

What does the next massive leap look like? We’re not very good at thinking about exponential change. Our brains think linearly. It is like my brain is not good at understanding what that change will mean when we are only one step away from it.

I believe society has sort of adjusted to this, and I think that it doesn’t really affect my life that much. It is difficult to know what the future looks like in the moment. And so I try to just keep my eyes focused on, well, what happened today?

Well, let me take the first question first. Is something changing that made me less nervous? I kind of go back and forth on this. It depends on the day. Sometimes, when I use GPT 4, it does something so amazing that it makes me think the future is going to look very different. What do we do now?

I am hearing you talk about how to find the green shoots of artificial intelligence. Has your view of artificial intelligence changed in a way that makes you less nervous? And do you actually think that more regulation is needed?

I think that’s the piece that caught the attention of the industry. But the executive order addresses all sorts of harms that artificial intelligence could potentially bring, such as discrimination, bias, fraud, and disinformation. There are some specific requirements in it: government agencies are supposed to figure out how to prevent AI from encouraging bias or discrimination in, for example, the criminal justice system, or how AI can be used for processing federal benefits applications in a way that’s fair to people.

They still have a lot to do. Again, the policy reads as very sweeping. What it means in practice, I think we’ll have to see how it plays out. There are some good ideas here.

Yes. And I would say that honestly, this was a pleasant surprise. Right? Like, I write about technology policy and proposed regulations a lot, and I don’t like a lot of what I see. When Biden was campaigning for the presidency, he suggested that we repeal Section 230 of the Communications Decency Act, which would have meant that every technology platform could be held responsible for what people post on it. To me, that was the worst kind of tech policy, because you’re painting with the broadest possible brush, you’re ignoring any positive use cases, and you’re just sort of legislating with a giant hammer. This is not that approach. These are people who have done the homework, who have been very thoughtful.

In other industries, if you make a new pharmaceutical drug, you can’t just not tell the government what you are doing. It has to be approved. This is the same kind of thing.

At a certain point, the government was going to have to step in. Now, that arrived, I think, sooner than I would have thought, right? The government is slow and sclerotic.


What President Biden’s Advisers Said About Open-Source AI Development

It’s just the story of the technology over and over again, right? Bright side and dark side. And you just have to understand it and deal with it as it is. And the open-source issue is one that we’ll definitely continue to work on and hear from people in the community about and figure out the path ahead.

Some of them are. Some VCs are telling me that if you think the technology makes making a bioweapon any easier, then you are a fool. This is what people are talking —

[LAUGHS]: A challenge with having really good safety discussions about this stuff is that I personally just do not try to use these tools for evil, you know? So I’m with you, and it’s difficult to know what the case is here.

So OK, this is the debate that’s happening in Silicon Valley about the executive order. In the course of your visit to the White House, you talked to some of President Biden’s advisers about this. What did they say?

Arati Prabhakar is the director of the Office of Science and Technology Policy. And I just said, does the government have a stance on whether it wants to see more open-source development or more closed development? Here is what she told me.

If I were still in venture capital, I would say the technology is democratizing. If I were still in the Defense Department, I would say it’s proliferating. And they’re both true.

I also asked Ben Buchanan, who is an adviser to President Biden, whether there were any green shoots that came out of this, any ways artificial intelligence could actually benefit us. Here’s what he told me about that.

I think it’s even more than green shoots. If we did not think there was substantial upside, we wouldn’t be trying so hard to calibrate the policy. Something like microclimate forecasting could reduce waste in the electricity grid, and the like. There is a lot of potential here, and we want to take full advantage of it.


Regulatory Capture or Genuine Concern?

And to be honest, that is just an issue where I am trying to learn and listen and read and talk to people. But I’m curious if you have a gut instinct on that.

I mean, Dario Amodei, Sam Altman — these are not people who became worried about AI recently, just as soon as they had big companies to protect and products to sell. They are people who, I think, are genuinely worried that AI could go wrong and are trying to put in place some common-sense things to prevent that. There is a very cynical argument that the people speaking about the risks of artificial intelligence are just doing it to enrich themselves.

They went to the government. They freaked them out. They said, regulate us now, and oh, by the way, here’s exactly how to do it. And now, they are starting to get what they want. And the result is going to be that they are the winners who take all, and everyone else is left by the wayside.

Regulatory capture is when an industry sets out to ensure that, to the extent any regulations are passed, it gets those regulations passed on its own terms. It sort of pulls the ladder up so that incumbents will always be in control and challengers will not be able to compete.

I agree with you, but I want to try to steelman the other argument, right? Here is what I am hearing from people in the open-source community. They think that we are seeing the beginning of regulatory capture.


Reporting Requirements for Large Models: The Open vs. Closed Debate

So it’s just saying, you have to tell the government, and you have to actually tell them that you’re doing safety testing, and sort of, if you’ve found anything dangerous that these models can do. So I would say the people who are objecting to this are not objecting to anything specific that applies to models currently existing.

I would include OpenAI and Anthropic. They say that we see a lot of possibilities for harm here. And so instead of just putting it up on GitHub and letting anybody download it and go nuts, we are going to build it ourselves. We’re going to do a bunch of rigorous testing. We’ll tell you about the test, but we are not going to let everyone play with it.

So briefly, open-source technology can be analyzed, examined. You can look at the code. You can usually fork it, change it to do your bidding. The people who love it say that it’s the safest way to do it.

If you get a lot of eyes on this, you are going to eventually build safer, better tech, and you will be able to make more money, because we will all be better off. Right? And then, you have the people who are taking a closed approach.

And this debate has been swirling in Silicon Valley for months now, but it really seems to have come to a head over this issue of having to report to the government if you are training a model larger than a certain size. Let’s talk about that. I don’t get the backlash to this.

It’s not telling AI developers, you can’t make a very large model, you’re not allowed to. It is not saying you can’t make a large open-source model. It is saying that if you are building a model larger than 10 to the 26th power FLOPS, you have to tell the government about it.


The 10-to-the-26th-Power FLOPS Threshold

It’s so fun to say “FLOPS.” I am going to start saying that if one of my friends has a huge failure, it was a 10-to-the-26th-power FLOPS. I’m saying, you FLOPSed so hard, you’re going to have to tell the federal government, bitch.

There is a threshold for when these requirements kick in: the model has to have been trained with more computing power than 10 to the 26th power floating-point operations. I looked it up. That is 100 septillion operations.
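A quick arithmetic check of that figure (the per-second rate in the second half is an illustrative assumption, not something from the episode):

```python
# A septillion is 10**24, so the executive order's 10**26-operation threshold
# works out to 100 septillion total training operations.
threshold = 10**26
print(threshold // 10**24)              # 100 (septillion)

# Illustrative scale only: a hypothetical cluster sustaining 10**18 operations
# per second would need about 10**8 seconds to cross the threshold.
seconds = threshold / 10**18
print(seconds / (60 * 60 * 24 * 365))   # roughly 3.2 years
```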

Totally. I think the industry was surprised by this. The people I talked to at AI companies — they did not know that this exact thing was coming. They were unsure of where these rules would kick in, and what the threshold would be. Would they apply to everything, big or small?


A Brief Detour: Dots and Halloween Candy

They are made out of recycled plastic and I don’t know why they are feeding them to children. Have you ever tasted one of those things? Good night.

Yes, literally. It was the only thing remaining at Target. So we bring home the Dots, and I’m testing the candy. I bite into a Dot, and a tooth comes out.

You know, I could recommend, actually, a lot of good costumes for that — Phantom of the Opera comes to mind. A mask that covers at least part of your face is really something.


Too Sticky: A Call to Outlaw the Dots

You know what’s so funny about this is that every year, there is a panic around Halloween candy. You’d better open up every single wrapper and make sure that nobody stuck a razor blade in there. And we always laugh. We say, oh, you people need to calm down. And then you actually had to go get emergency dental work.

Yes. Yes. It was very bad. And these Dots — they’re too sticky. I call on the Biden administration to outlaw the Dots because we need to do something.


Casey Goes to the White House: Costumes, the President’s Dog, and a Signing Ceremony

Because here are the things — I went to the White House once when I was a child, part of a school tour. Very exciting. Remember very little of it. But here are the things I know about the White House. I am aware that the president lives there.

I know there is an office called the West Wing. The dog at the White House until recently was named Commander and he bit people.

[LAUGHS]: Let me tell you. From the moment I walked onto the grounds, my head was on a swivel. I wanted to know where that dog was, because I wanted to meet him and pet him. And can you imagine anything better for the podcast than me getting bitten by the President’s dog?

[LAUGHS]: No. It is funny that you mention treats, because we went on the Monday before Halloween, so Monday of this week. I went down with our producer. We took in the sights and sounds. There were children dressed up in costumes on the grounds of the White House.

I saw a lot of Barbies, but I did not see a dog. The offices of staffers in the executive office building had been transformed into showcases of Hollywood intellectual property. There was a Barbie room. There was a Harry Potter room.

The people in the White House digital office had transformed their office into something called the Multiverse of Madness. And when you took a left, you were standing in Bikini Bottom from the SpongeBob SquarePants universe. There were bubbles blowing everywhere.

And I’m setting this scene, because you have to understand, I am there to listen to the President talk about the most serious thing in the world. And while we were interviewing his officials about the executive order, we’re literally hearing children screaming about candy. So it was an absolute fever dream of a day at the White House.

There was a signing ceremony that the President did where he put this executive order into place.

That is correct. Yeah. So after we had some interviews at the executive office building, we walked over to the East Room of the White House, which was very full of people from industry, people who work on advocacy around these issues. The President and the Vice President came out. Chuck Schumer, the Senate majority leader, was there.


Regulating Next-Generation Models, and the Photoshop Liability Analogy

Yeah. We could go anywhere in the world. I think the part of the order that has gotten the most attention is the aspect that attempts to regulate the creation of next-generation models. So the stuff that we’re using every day — the Bards, the GPT 4s — those are mostly left out of this order.

The President has established a rubric that will apply if there is a GPT-5 or a Claude 3. And when there is, it will come with some new requirements, starting with: the companies will have to inform the federal government that they have trained such a model, and they will have to disclose what safety tests they have done on it to understand what capabilities it has. So to me, that is the big headline that came out of this: OK, we actually are going to at least put some disclosure requirements around the Bay Area.

[LAUGHS]: Now, I’m not a lawyer, but I feel like I have a pretty good grasp of one of the issues at stake here, which is: who does the liability fall on? If I use Photoshop to make a fake picture of money and I try to use it at a store, that isn’t on Adobe. That is on me, because I made that decision.

Correct, or fake money or something like that. And that kind of general-purpose software gets more protection than a tool built just to draw Disney characters or print fake money. Because it can do so many other things, the courts aren’t likely to treat it as an infringing tool. Is that what you mean?

So I see why you say that’s strange, but in fact, it’s exactly how you would make a general-purpose tool. To me, a program that will do lots of different things is more useful than a program that will only draw Disney characters.


Bigger Models, Licensing Deals, and the Westlaw Case

So in some sense, the bigger your model is and the more data it was trained on, the more potentially protected you are from some of these claims. If you want to win lawsuits brought by individual creators or publishers, the incentive is to collect as much data as you can, so that no one can come back and say, that output looks a lot like what I made.

The court said that question was going to a jury. And the reason is, Westlaw owns the dataset on which the model was trained. My point is that these licensing deals are not going to help individual authors. The people who wrote the summaries for Westlaw do not get any more money if Westlaw succeeds.

There are situations that might well be different: for example, if you train entirely on one artist. That is a design choice. And there is a case proceeding right now involving a company that trained on Westlaw’s summaries of court decisions rather than writing its own.


“The Problem Is Capitalism”: Why More Copyright May Not Help Artists

So you could sort of randomly attribute, I suppose, or you could pass money through for the fraction of the time that an output looks close to a particular image. And I would just say, are you going to be able to buy a coffee at Starbucks with that money? I wouldn’t place too many bets.

Here’s the thing. I’m very skeptical of these models. Because again, if they’re done by the big publishers, they are not in the business of actually delivering most of the money to the authors or the artists. A lot of the time, the image won’t look like anything in the data set.

Can I just say, this is a line that Cory Doctorow has in his book: the problem is capitalism. That is, giving individual artists more copyright rights is like giving your kid more lunch money when the bullies take it at lunch. The bullies are just going to take whatever money you give, right?

Even though it isn’t based in any legal requirement, I think a development like that is really powerful. There are ways to get paid, but it’s the classic thing about this: only publishers with big piles of works can hope to get paid. It is not worth it to license on an individual basis.


The Rise of Voluntary Opt-Outs

This, I would say, is the rise of voluntary opt-outs. And that’s very similar to what developed with Google. Google respects what are called robot exclusion headers. It isn’t required to, because crawling is fair use for many purposes, but it respects them anyway.
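As a small illustration of the robot-exclusion convention mentioned here (the site URL and crawler name are made-up examples), a well-behaved crawler checks a site's robots.txt before fetching anything:

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

# Ask whether a crawler identifying itself as "ExampleImageBot" may fetch a page.
if rp.can_fetch("ExampleImageBot", "https://example.com/gallery/painting.jpg"):
    print("Crawling allowed by robots.txt")
else:
    print("Site has opted out; a well-behaved crawler skips this URL")
```

Nothing in the protocol forces compliance, which is exactly the point being made: respecting it is voluntary.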

Look, people will say that you have to license everything. The law has never been receptive to that argument. But litigation is expensive. So what courts in other fair use cases have said is, just because you were willing to negotiate to avoid a really expensive lawsuit doesn’t mean that the use isn’t fair use.

I was really struck a few weeks back, when OpenAI licensed some old articles from the Associated Press. Many of these articles are already online, and OpenAI could presumably train future models on them for free. As a lawyer, do you read that deal as a concession that OpenAI ought to be paying to license all of this data? Or is the law robust enough that it can do that as a goodwill gesture without incurring any more liability?

These are still early days, and there are things that can be fixed and refiled. And the claim that survived is the part that requires a fair use analysis, not the other parts of the claim that were about the outputs. So I would say nobody should really rest on their laurels right now.


What Counts as a Copyright Violation, and Who Can Afford to Litigate?

The classic question is whether there could be a lawsuit over this. Well, it’s America. You can always file a lawsuit. Right? Can you win? That’s a very different question. And can you afford to litigate? That is a completely different question.

[LAUGHS]: Totally. Some of the allegations were dismissed because the artists’ works weren’t registered with the Copyright Office. But there was one claim that the judge did let stand, which is the direct infringement claim against Stability AI.

Right. And then, we’ve expanded it as well to cover the idea of derivative works, which is a contested category, but the basic idea is, if you’re the author of a book, you should have the right to make a movie or a translation of the book — that that’s your right.

Right. Copyright, at least when it was first conceived, is about literal, identical copies of something that you do not own, that you are directly profiting from.

We have been talking about what is not a violation of the Copyright Act. It might help me just to remind myself, what is a copyright violation? Like, give me some cut-and-dried cases of, oh, yeah, that’s against the law.


Are the Outputs Infringing, and Who Is Responsible?

I think that, in the traditional fair use analysis, it doesn’t matter all that much. If your output is non-infringing, that is a strong case for fair use.

It’s a little perplexing. I am also not a programmer, but when you talk to them, it does sound fairly consistent: no, there aren’t pictures in the model. There is a lot of data. There are rare occurrences, when there are 500 versions of Starry Night in the data set, where an image can be pulled back out, but the average image can’t be gotten.

Now, that is the argument that the artists are making. But the companies and the people who work in AI research have said, like, this is not actually how these models work. What do you make of that argument?

But we have a robust system for attributing responsibility to the person who tried really hard to find the infringing copy on Google. So there are definitely some principles of safe design. But the fact that they aren’t perfect really shouldn’t be the end of the question of who’s responsible. If I worked really hard and was able to come up with something that looked like Sarah Andersen’s cartoons after a 1,500-word prompt, I would say that is on me.

So there are lots of circumstances where, for example, people can use Google and say, I want to watch “Barbie.” And thanks to the ingenuity of the search engine, it is not impossible to find a way to watch “Barbie” on the internet without permission.

And so part of the answer is, well, is the output actually infringing? Right? So if it’s not, then no. And if it is, then actually, I want to start asking questions. Who is responsible for it?

I think the question for me is, is that truly analogous to a situation where I’m a very popular artist, people love to type my name into Stable Diffusion, you get images that look like my life’s work, and I get $0 for that?


The Google Index Analogy and the Google Books Precedent

It is making copies of the pages. It is caching those pages so that it can serve them up faster. All of that is intellectual property of one type or another. And when you enter a query into the search engine, it spits out a result that takes advantage of that intellectual property without reproducing it exactly.

So I think this is an interesting analogy to think about for a minute. You are saying that when you think about what Google does on the web, you think of it as an index. Right? It looks at every single page.

The idea of doing something new with existing works is fairly well established. The question is, of course, whether we think that there’s something uniquely different about LLMs that justifies treating them differently. That is where I end.

There are things that you can do that are not fair. Right? But Google, for example, with the book project, doesn’t give you the full text and is very careful about not giving you the full text. And the court said that the snippet production, which helps people figure out what the book is about but doesn’t substitute for the book, is a fair use.

Again, my view is that we have a set of tools for dealing with this. You can disagree with them. The rise of the internet looms large over everything.


Should a Prompt Alone Be Enough to Get a Copyright?

And I’m curious what you make of that argument. Because that’s something that I’ve heard from artists, from writers who are mad that their books were used to train AI language models. What are the implications for the model’s intellectual property?

If it wasn’t within your contemplation, like, there’s room for accident and serendipity in human creation. But there’s also a point at which the serendipity is no longer yours.

If the outputs look quite different from each other, is that because the prompt didn’t specify enough to connect them, as human creations, to the output? And I have a second question about this: what about the ones you reject because you don’t like them, where you are like, no, that is not what I wanted? Are they still yours?

So I guess what I would say is I’m still mostly of the opinion that the prompt alone shouldn’t count, although you can find people who disagree. But here’s my pitch, which is you often get a choice of multiple outputs that look quite different from each other. And so I have a couple of questions.

But these days, people are writing these meticulous prompts. It’s a banana that is dressed like a detective in a 1940s noir movie, but he’s at Disneyland, right? The output felt like it had a bit more human authorship in it to me. I am not a lawyer. Like, in your view, is that all sort of the same thing?

So at the risk of derailing, I am just super fascinated by this question. So I can see your point of view. If I type the word Banana into DALL-E and it pops up a banana, I could see why the argument is that I shouldn’t have a copyright.

And if you’re giving a copyright in a selfie, is that the same thing as giving a copyright in the footage from a security camera that’s running 24/7? And you know, although you sometimes do have to draw lines, that’s not unknown to the law, and we can just decide what our rules are going to be without really disrupting anything, in part because most of the time, it doesn’t come up whether a human is enough involved.

I think copyright has the tools to handle these questions in a fairly conventional way. On the other hand, if people decide that we need something new, we’ve changed copyright law before. So it’s quite possible that we could fruitfully get a new law. But right now, we do have established principles, and I don’t believe that they break when confronted with artificial intelligence.


Our Guest: Rebecca Tushnet of Harvard Law School

So that is totally shocking to me, right? When we talked about this on the show, it seemed to me that it was new. But what about it struck you as conventional?

[LAUGHS]: Yes. So we decided to bring in Rebecca Tushnet. She is a professor at Harvard Law School and a leading authority on First Amendment, intellectual property, and copyright law. I also read, according to her bio, that she is an expert on the law of engagement rings.

Yes. On one hand, that is true. But on the other, the core claim, the one that you mentioned at the top of this segment, is allowed to go forward. We’re going to see whether the artists were wronged in a way that will allow them to get some money.


Ties, Majesty, and Watching Democracy in Action

This feels like one of the most important questions in the field of artificial intelligence. We’re all using these tools, and a lot of people are thinking, hey, I helped make this thing without my consent. Uh, where’s my cut?

I will wear a tie next time. (LAUGHING) Actually, I have to say, our producer, in what was a transparent effort to get me in trouble, asked one of our minders at the White House, don’t most people wear a tie here? The man looked very uncomfortable. I figured he did not want to humiliate me, but he was basically like, pretty much everyone wears a tie.

I mean, look, here’s the thing. Not to stan for the federal government, but when it wants to, the government can be pretty frickin’ majestic. You grow up with a lot of mythology about American history and democracy as a kid, and it’s like, OK, now you’re in the room, seeing it happen. At the risk of sounding cringe, I’m going to say yes, I did like my trip to the White House and watching democracy in action.


Bioweapons, Deepfakes, and Content Authenticity Standards

It does similar stuff around the possibility of bioweapons. So I do think the smart thing here is, they’re trying to identify, well, what sort of seems like it might be easy to do with a much more powerful version of this thing and start to develop some mitigations today?

It is not that bad a problem today, but it may be in a few years, and the government is getting ahead of that. If it can develop authenticity standards now, we will be prepared when the stuff gets more serious.

I believe that’s true. But there is still reason for hope, I think, in this executive order. It directs the Department of Commerce to develop content authenticity standards, for the very meaningful reason of trying to make sure that citizens know when the government has actually communicated with them. That’s kind of an existential problem for the government.


Regulation Usually Follows Disaster, Plus Credits

If we know one thing about the history of regulation, in this country at least, it is that often, the biggest regulations are passed in the wake of truly horrendous damage. Right? It took the financial markets collapsing in 2008 for Dodd-Frank to be passed to regulate the banking system. A lot of our labor laws and labor protections came after things like the Triangle Shirtwaist Factory fire, when people died because there were not adequate safety protections at their workplace.

Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergeson. Special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda. You can email us at hardfork@nytimes.com.

Hard Fork is produced by Rachel Cohn and Davis Land. We had help this week from Emily Lang. We’re edited by Jen Poyant. The episode was fact-checked. Today’s show was engineered, composed, and performed by Elisheba Ittoop, Dan Powell, and Sophia Lanman.


Time for HatGPT: Waymo Adds Barf Bags

Yeah, but the ride was very smooth, and so I was confused for a minute. I was wondering if I should be bracing for turbulence. Should I be wearing tighter clothes? What is going on here? Well, all right. It is time for Hat GPT.

I mean, this had become an annual tradition of the citizens of this fair city. And now, well, if you can’t find a Waymo, you’re out of luck. It is indeed. I noticed something new in the Waymo this week, which was that they now come with barf bags.

I don’t think it’s for turbulence. I think it’s for people who have been drinking. I believe it was a special Halloween thing. There must be a story behind this. Because if you vomit in an Uber, the driver has to clean it up, and they can charge you a cleaning fee.


Cruise Halts All Driverless Taxi Operations

I think in general, regulators are just on very high alert for anything dangerous involving self-driving cars. It’s a big blow to Cruise, which has had a hard time convincing people that its rides are safe. There have been incidents of traffic jams caused by Cruise vehicles.

The rebels have won. Like, this was the future liberals want, and we’re now left without these cars. This particular accident is controversial because the victim was hit first by another car.

“Cruise stops all driverless taxi operations in the United States.” This is from “The New York Times.” The company said last week that it would put all of its cars out of commission in the US, two days after California regulators told the company to stop operating in the state.


Microsoft’s AI-Generated Poll on a News Story

All right, we want to ask our listeners about it. Who do you think is to blame? Is it the humans or the artificial intelligence? We will generate a poll with the help of artificial intelligence and put it underneath the article.

No, no, no, no, no, no, no, no. Do not let the humans off the hook for this. Because someone at Microsoft decided, you know what would boost our engagement on these news articles? Some polls generated by artificial intelligence. Yes, the AI wrote the poll, but this is not the fault of the artificial intelligence. It is the fault of the Microsoft person who decided to implement these polls, and we should not let them off the hook for that.

It’s so dystopian. Oh, my god. Imagine you live a dignified life. You do some things. Your obituary gets written up in a major newspaper. And they attach a poll to it, generated by artificial intelligence: was Casey a good person? Sound off in the comments.

I have this theory that the use of generative AI in news just always trends toward crap. You know what I mean? Like, you have this idea and you think, oh, this is so cheap, and it’s so futuristic. And let’s put it into practice, and we’ll show how innovative we are. And in practice, it always just trends toward crap. So this is...

We now know that they are big on artificial intelligence. Maybe they’re putting things in around the stories that they aggregate. Do not do it for stories about people dying. That should be like a very easy no.

Oh, god, I cannot believe it. This sucks. Like, I sort of vaguely have a sense of how this could have happened, right? Microsoft runs msn.com and maybe some other news aggregators. It has stories from all over the place.

Next to this article, it put a poll. And the poll asked, what do you think is the reason behind the woman’s death? Readers were then asked to choose from three options: murder, accident, or suicide.


Deepfakes, “We Just Make the Tools,” and Who Bears the Liability

Yeah. Here is what I am going to say. I hope the next movie in the series shows Congress coming up with a law, and inspires them to actually do something. It would be great for this country.

And he stopped and was basically like, forget your family. It can fool you. He says, I look at these things, and I think, when the hell did I say that? That is a direct quote.

There is an argument that these companies make, that we just make the tools. Users’ use of them can either be illegal or not. But either way, we are shielded. Is that a valid argument?

In general, yes. And so some of my questions are about the tweaked models that people are making, say, to generate porn or other infringing material. But in general, those people are taking the models and then tweaking them themselves to do that. That is on them.


Market Power, Paying Artists, and Peppermint Patties

For a long time in our society, the artists and writers have been living on easy street. But now, finally, along come these new technologies to take them down a peg, and they’re actually going to have to work for a living. I am so sorry to all of the writers and artists.

You can’t solve a problem of economic structure by handing out rights to somebody who doesn’t actually have the market power to exercise them. Because the publisher is still going to say, well, if you want to publish with me, you’ve got to give me all the rights, and you will say yes, because you would like to be in print. I think we need to discuss how we pay artists, instead of thinking we can fix it with artificial intelligence.

That’s right. Well, fascinating. If the courts end up putting these companies out of business over fair use, I hope we can have you back. But...

The old Facebook offices had a big jar of them. And so whenever I would go down there, on the way in and out, I was always, like, grabbing a couple of Peppermint Patties.

Is it possible they had a secret dossier on you? Like, the guy from “Platformer” loves Peppermint Patties. Let’s get a big bowl out so he’ll be more favorable to us.

No, those places buy so many candies and so many foods. They don’t need to bother having a dossier. You walk in. They’re like, oh, what’s your favorite food? Lobster bisque? Yeah, we have that.


The HatGPT Hat and an AI Show That Evolved Into Nothing

This is the HatGPT hat. We got some comments saying it looked like a budget hat that was not professionally designed. I would like to say that you are correct. I made this thing in about five minutes. I believe I paid $22 for it. If someone wants to make us a better HatGPT hat, we’re open.

Absolutely. As the show becomes more successful, the hat will become more elaborate and ornate over time.

Hat GPT, of course, is the game where we draw news stories about technology out of a hat, and we generate plausible-sounding language about them until one of us gets sick of the other one talking and says, stop generating.

This is one of my favorite artificial intelligence projects. The main character of the show has been walking straight into a closed refrigerator for the last five days, the music has been stuck on a short repeating loop, and it is more popular than it has been in a while.

There is something beautiful about a show that was famous for being about nothing being recreated as an artificial intelligence project that, over time, evolved into almost literally nothing, and then became more popular when it did.


A Floating AI Compute Cluster on a Barge

All right, Kevin. This next story is a tweet from something called Dell Complex, which describes something called the “Blue Sea Frontier Compute Cluster,” which is a barge. Are you familiar with this barge-based compute platform?

[LAUGHS]: I watched this go around on social media. They call it an augmented reality corporation. I think it’s an art project, but it’s basically a bit these people are doing, saying: we are so mad about the Biden administration’s draconian executive order mandating that big AI developers report their models to the government that we are going to build, essentially, a floating AI-computing cluster on a barge in international waters, so that we’re not subject to any regulations.

The search giant considered a project in the early 2010s that proposed retail stores on floating barges that would travel from port to port.

[LAUGHS]: OK. It would be similar to old-timey movies where people wave at the ships as they come in, but it would be a giant Google store that pulled up with new phones.


Biden grew more worried about AI after watching ‘Mission Impossible: Dead Reckoning,’ says White House Deputy Bruce Reed

(LAUGHING) Stop generating. All right. This one says, “Joe Biden grew more worried about AI after watching ‘Mission Impossible: Dead Reckoning,’ says White House Deputy.” This is from “Variety.”

According to Bruce Reed, the deputy White House chief of staff, Joe Biden became alarmed after seeing fake AI-generated images of himself and learning about the terrifying technology of voice cloning.

You know, it didn’t, although he appeared to deviate from the script when he was giving his remarks. He said something like: with just a few seconds of your voice, it can fool your family.