ankit219 19 hours ago

Not just Meta: 40 EU companies urged the EU to postpone the rollout of the AI Act by two years due to its unclear nature. This code of practice is voluntary and goes beyond what is in the act itself. The EU published it in a way that suggests there would be less scrutiny if you voluntarily sign up for this code of practice. Meta would face scrutiny on all ends anyway, so it does not seem a plausible case for signing something voluntary.

One of the key aspects of the act is that a model provider is responsible if downstream partners misuse it in any way. For open source, it's a very hard requirement[1].

> GPAI model providers need to establish reasonable copyright measures to mitigate the risk that a downstream system or application into which a model is integrated generates copyright-infringing outputs, including through avoiding overfitting of their GPAI model. Where a GPAI model is provided to another entity, providers are encouraged to make the conclusion or validity of the contractual provision of the model dependent upon a promise of that entity to take appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works.

[1] https://www.lw.com/en/insights/2024/11/european-commission-r...

  • m3sta 11 hours ago

    The quoted text makes sense when you understand that the EU provides a carveout for training on copyright protected works without a license. It's quite an elegant balance they've suggested despite the challenges it fails to avoid.

    • Oras 5 hours ago

      Is that true? How can they decide to wipe out the intellectual property of an individual or entity? It's not theirs to give away.

      • elsjaako 5 hours ago

        Copyright is not a god-given right. It's an economic incentive created by government to make desired behavior (writing and publishing books) profitable.

        • kriops 4 hours ago

          Yes it is. In every sense of the phrase, except the literal.

          • Zafira 4 hours ago

            A lot of cultures have not historically considered artists’ rights to be a thing and have had it essentially imposed on them as a requirement to participate in global trade.

            • kolinko 4 hours ago

              Even in Europe, copyright has only been protected for the last 250 years, and over the last 100 years it's been constantly updated to take new technologies into consideration.

              • pyman an hour ago

                The only real mistake the EU made was not regulating Facebook when it mattered. That site caused pain and damage to entire generations. Now it's too late. All they can do is try to stop Meta and the rest of the lunatics from stealing every book, song and photo ever created, just to train models that could leave half the population without a job.

                Meta, OpenAI, Nvidia, Microsoft and Google don't care about people. They care about control: controlling influence, knowledge and universal income. That's the endgame.

                Just like in the US, the EU has brilliant people working on regulations. The difference is, they're not always working for the same interests.

                The world is asking for US big tech companies to be regulated more now than ever.

      • arccy 5 hours ago

        "intellectual property" only exists because society collectively allows it to. it's not some inviolable law of nature. society (or the government that represents them) can revoke it or give it away.

        • impossiblefork 4 hours ago

          Yes, but that's also true of all other things that society enforces-- basically the ownership of anything you can't carry with you.

          • CaptainFever 3 hours ago

            Yes, that is why (most?) anarchists consider property that one is not occupying and using to be fiction, held up by the state. I believe this includes intellectual property as well.

        • figassis 3 hours ago

          You're alive because society collectively allows you to be.

          • lioeters 2 hours ago

            A person being alive is not at all similar to the concept of intellectual property existing. The former is a natural phenomenon, the latter is a social construct.

  • t0mas88 16 hours ago

    Sounds like a reasonable guideline to me. Even for open source models, you can add a license term that requires users of the open source model to take "appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works"

    This is European law, not US. Reasonable means reasonable and judges here are expected to weigh each side's interests and come to a conclusion. Not just a literal interpretation of the law.

    • sealeck 2 hours ago

      > This is European law, not US. Reasonable means reasonable and judges here are expected to weigh each side's interests and come to a conclusion. Not just a literal interpretation of the law.

      I think you've got civil and common law the wrong way round :). US judges have _much_ more power to interpret law!

      • saubeidl an hour ago

        It is European law, as in EU law, not law from a European state. In EU matters, the teleological interpretation, i.e. intent, applies:

        > When interpreting EU law, the CJEU pays particular attention to the aim and purpose of EU law (teleological interpretation), rather than focusing exclusively on the wording of the provisions (linguistic interpretation).

        > This is explained by numerous factors, in particular the open-ended and policy-oriented rules of the EU Treaties, as well as by EU legal multilingualism.

        > Under the latter principle, all EU law is equally authentic in all language versions. Hence, the Court cannot rely on the wording of a single version, as a national court can, in order to give an interpretation of the legal provision under consideration. Therefore, in order to decode the meaning of a legal rule, the Court analyses it especially in the light of its purpose (teleological interpretation) as well as its context (systemic interpretation).

        https://www.europarl.europa.eu/RegData/etudes/BRIE/2017/5993...

        • chimeracoder 42 minutes ago

          > It is European law, as in EU law, not law from a European state. In EU matters, the teleological interpretation, i.e. intent, applies

          I'm not sure why you and GP are trying to use this point to draw a contrast to the US? That very much is a feature in US law as well.

    • gkbrk 4 hours ago

      > Even for open source models, you can add a license term that requires users of the open source model to take appropriate measures to avoid [...]

      You just made the model not open source

      • badsectoracula 3 hours ago

        Instead of a license term you can put that in your documentation - in fact that is exactly what the code of practice mentions (see my other comment) for open source models.

      • h4ck_th3_pl4n3t 28 minutes ago

        An open source cocaine production machine is still an illegal cocaine production machine. The fact that it's open source doesn't matter.

        You seem to not have understood that different kinds of appliances need to comply with different kinds of law. And whether you call it open source or not doesn't change anything about its legal status.

        And every law written is a compromise between two opposing parties.

      • LadyCailin 3 hours ago

        “Source available” then?

    • whatevaa 2 hours ago

      There is no way to enforce that license. Free software doesn't have the funds for such lawsuits.

    • deanc 6 hours ago

      Except that it's seemingly impossible to protect against prompt injection. The cat is out of the bag. Much like a lot of other legislation (e.g. the cookie law, or being responsible for user-generated content when you have millions of posts per day), it's entirely impractical, albeit well-meaning.

      • lcnielsen 6 hours ago

        I don't think the cookie law is that impractical? It's easy to comply with by just not storing non-essential user information. It would have been completely nondisruptive if platforms agreed to respect users' defaults via browser settings, and then converged on a common config interface.

        It was made impractical by ad platforms and others who decided to use dark patterns, FUD and malicious compliance to deceive users into agreeing to be tracked.
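
        To make the "respect users' defaults via browser settings" idea above concrete, here is a minimal sketch (Python) of a server honouring the real Global Privacy Control signal, the Sec-GPC request header; the policy shown and the set_analytics_cookies helper are assumptions for illustration, not anything the law prescribes:

          def should_set_tracking_cookies(headers: dict) -> bool:
              # "Sec-GPC: 1" is the browser-level signal that the user has opted out
              # of tracking; in that case stick to the minimal, consent-free setup.
              return headers.get("Sec-GPC") != "1"

          # Usage inside a request handler (set_analytics_cookies is hypothetical):
          # if should_set_tracking_cookies(request.headers):
          #     set_analytics_cookies(response)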

        • jonathanlydall 5 hours ago

          I recently received an email[0] from a UK entity with an enormous wall of text talking about processing of personal information, my rights and how there is a “Contact Card” of my details on their website.

          But with a little bit of reading, one could ultimately summarise the enormous wall of text simply as: “We’ve added your email address to a marketing list, click here to opt out.”

          The huge wall of text email was designed to confuse and obfuscate as much as possible with them still being able to claim they weren’t breaking protection of personal information laws.

          [0]: https://imgur.com/a/aN4wiVp

          • tester756 4 hours ago

            >The huge wall of text email was designed to confuse and obfuscate as much as possible with

            It is pretty clear

            • johnisgood 4 hours ago

              Only if you read it. Most people do not read it, same with ToSes.

              • octopoc 3 hours ago

                If you ask someone if they killed your dog and they respond with a wall of text, then you’re immediately suspicious. You don’t even have to read it all.

                The same is true of privacy policies. I’ve seen some companies have very short policies I could read in less than 30s, those companies are not suspicious.

                • 1718627440 an hour ago

                  That's true, and it's because of the EU privacy regulation: it makes companies write a wall of text before doing something suspicious.

        • mgraczyk 5 hours ago

          Even EU government websites have horrible intrusive cookie banners. You can't blame ad companies; there are no ads on most of those sites.

          • lcnielsen 4 hours ago

            Because they track usage stats for site development purposes, and there was no convergence on an agreed upon standard interface for browsers since nobody would respect it. Their banners are at least simple yes/no ones without dark patterns.

            But yes, perhaps they should have worked with e.g. Mozilla to develop some kind of standard browser interface for this.

        • deanc 6 hours ago

          It is impractical for me as a user. I have to click on a notice on every website on the internet before interacting with it - these notices are often very obtuse and don't have a "reject all" button but a "manage my choices" button which takes you to an even more convoluted menu.

          Instead of exactly as you say: a global browser option.

          As someone who has had to implement this crap repeatedly, I can't even begin to imagine the amount of global time that has been wasted implementing it by everyone, fixing mistakes related to it, and, more importantly, by users having to interact with it.

          • lcnielsen 5 hours ago

            Yeah, but the only reason for this time wastage is that website operators refuse to accept what would become the fallback default of "minimal", for which they would not need to seek explicit consent. It's a kind of arbitrage, like those scammy websites that send you into redirect loops with enticing headlines.

            If anything, the law is written to encourage such defaults; it just wasn't profitable enough, I guess.

            • fauigerzigerk 4 hours ago

              Not even EU institutions themselves are falling back on defaults that don't require cookie consent.

              I'm constantly clicking away cookie banners on UK government or NHS (our public healthcare system) websites. The ICO (UK privacy watchdog) requires cookie consent. The EU Data Protection Supervisor wants cookie consent. Almost everyone does.

              And you know why that is? It's not because they are scammy ad funded sites or because of government surveillance. It's because the "cookie law" requires consent even for completely reasonable forms of traffic analysis with the sole purpose of improving the site for its visitors.

              This is impractical, unreasonable, counterproductive and unintelligent.

              • grues-dinner 3 hours ago

                > completely reasonable

                This is a personal decision to be made by the data "donor".

                The NHS website cookie banner (which does have a correct implementation in that the "no consent" button is of equal prominence to the "mi data es su data" button) says:

                > We'd also like to use analytics cookies. These collect feedback and send information about how our site is used to services called Adobe Analytics, Adobe Target, Qualtrics Feedback and Google Analytics. We use this information to improve our site.

                In my opinion, it is not, as described, "completely reasonable" to consider such data hand-off to third parties as implicitly consented to. I may trust the NHS but I may not trust their partners.

                If the data collected is strictly required for the delivery of the service and is used only for that purpose and destroyed when the purpose is fulfilled (say, login session management), you don't need a banner.

                The NHS website is in a slightly tricky position, because I genuinely think they will be trying to use the data for site and service improvement, at least for now, and they hopefully have done their homework to make sure Adobe, say, are also not misusing the data. Do I think the same from, say, the Daily Mail website? Absolutely not, they'll be selling every scrap of data to anyone paying before the TCP connection even closes. Now, I may know the Daily Mail is a wretched hive of villainy and can just not go there, but I do not know about every website I visit. Sadly the scumbags are why no-one gets nice things.

                • fauigerzigerk 2 hours ago

                  >This is a personal decision to be made by the data "donor".

                  My problem is that users cannot make this personal decision based on the cookie consent banners because all sites have to request this consent even if they do exactly what they should be doing in their users' interest. There's no useful signal in this noise.

                  The worst data harvesters look exactly the same as a site that does basic traffic analysis for basic usability purposes.

                  The law makes it easy for the worst offenders to hide behind everyone else. That's why I'm calling it counterproductive.

                  [Edit] Wrt NHS specifically - this is a case in point. They use some tools to analyse traffic in order to improve their website. If they honour their own privacy policy, they will have configured those tools accordingly.

                  I understand that this can still be criticised from various angles. But is this criticism worth destroying the effectiveness of the law and burying far more important distinctions?

                  The law makes the NHS and Daily Mail look exactly the same to users as far as privacy and data protection are concerned. This is completely misleading, don't you think?

                  • 1718627440 an hour ago

                    > even if they do exactly what they should be doing in their users' interest

                    If they only do this, they don't need to show anything.

                    • fauigerzigerk 39 minutes ago

                      Then we clearly disagree on what they should be doing.

                      And this is the crux of the problem. The law helps a tiny minority of people enforce an extremely (and in my view pointlessly) strict version of privacy at the cost of misleading everybody else into thinking that using analytics for the purpose of making usability improvements is basically the same thing as sending personal data to 500 data brokers to make money off of it.

              • troupo 3 hours ago

                > It's because the "cookie law" requires consent even for completely reasonable forms of traffic analysis with the sole purpose of improving the site for its visitors

                Yup. That's what those 2000+ "partners" are all about if you believe their "legitimate interest" claims: "improve traffic"

              • FirmwareBurner 3 hours ago

                >This is impractical, unreasonable, counterproductive and unintelligent.

                It keeps the political grifters who make these regulations employed; that's kind of the main point of the EU's and UK's endless stream of regulations upon regulations.

            • deanc 5 hours ago

              The reality is the data that is gathered is so much more valuable and accurate if you gather consent when you are running a business. Defaulting to a minimal config is just not practical for most businesses either. The decisions that are made with proper tracking data have a real business impact (I can see it myself - working at a client with 7 figure monthly revenue).

              I'm fully supportive of consent, but the way it is implemented is impractical from everyone's POV and I stand by that.

              • bfg_9k 5 hours ago

                Are you genuinely trying to defend businesses unnecessarily tracking users online? Why can't businesses sell their core product(s) and you know... not track users? If they did that, then they wouldn't need to implement a cookie banner.

                • deanc 4 hours ago

                  Retargeting etc. is massive revenue for online retailers. I support their right to do it if users consent to it. I don't support their right to do it if users have not consented.

                  The conversation is not about my opinion on tracking, anyway. It's about the impracticality of implementing legislation that is hostile and time-consuming for website owners and users alike.

                  • owebmaster 2 hours ago

                    > Retargetting etc is massive revenue for online retailers

                    Drug trafficking, stealing, scams are massive revenue for gangs.

                • lcnielsen 4 hours ago

                  Plus, with any kind of effort put into a standard browser setting you could easily have some granularity, like: accept anonymous ephemeral data collected to improve the website, but not stuff shared with third parties, or anything collected for the purpose of tailoring content or recommendations for you.

                • artathred 2 hours ago

                  Are you genuinely acting this obtuse? What do you think Walmart and every single retailer does when you walk into a physical store? It's always constant monitoring to be able to provide a better customer experience. This doesn't change with online; businesses want to improve their service and they need the data to do so.

                  • 1718627440 an hour ago

                    If you're talking about the same jurisdiction as these privacy laws, then this is illegal. You are only allowed to retain videos for 24h and only use them for, basically, calling the police.

                  • owebmaster 2 hours ago

                    > it’s always constant monitoring to be able to provide a better customer experience

                    This part gave me a genuine laugh. Good joke.

                    • artathred 2 hours ago

                      Ah yes, because Walmart wants to harvest your in-store video data so they can eventually clone you, right?

                      adjusts tinfoil hat

              • user5534762135 5 hours ago

                That is only true if you agree with ad platforms that tracking ads are fundamentally required for businesses, which is trivially untrue for most enterprises. Forcing businesses to get off privacy-violating tracking practices is good, and it's not the EU that's at fault for forcing companies to be open about ad networks' intransigence on that front.

              • discreteevent 5 hours ago

                > just not practical for most businesses

                I don't think practical is the right word here. All the businesses in the world operated without tracking until the mid 90s.

              • ta1243 5 hours ago

                Why would I ever want to consent to you abusing my data?

          • 1718627440 an hour ago

            I don't have to, because there are add-ons to reject everything.

          • tcfhgj 2 hours ago

            Just don't process any personal data by default when not inherently required -> no banner required.

  • dmix 16 hours ago

    Lovely when they try to regulate a burgeoning market before we have any idea what the market is going to look like in a couple of years.

    • remram 16 hours ago

      The whole point of regulating it is to shape what it will look like in a couple of years.

      • dmix 16 hours ago

        Regulators often barely grasp how current markets function, and now they are supposed to be futurists too? Government regulatory interests almost always end up lining up with protecting entrenched interests, so it's essentially asking for a slow-moving group of the same mega companies. Which is very much what Europe's market looks like today: stasis and a shift toward a stagnating middle.

        • stuaxo 10 hours ago

          The EU is founded on the idea of markets and regulation.

          • miohtama 6 hours ago

            The EU is founded on the idea of useless bureaucracy.

            It's not just IT. Ask any EU farmer.

            • fxtentacle 5 hours ago

              Contrary to the constant whining, most of them are actually quite wealthy. And thanks to strong right to repair laws, they can keep using John Deere equipment without paying extortionate licensing fees.

              • mavhc 4 hours ago

                They're wealthy because they were paid for not using their agricultural land, so they chopped down all the trees on parts of their land that they couldn't use, to classify it as agricultural, got paid, and as a side effect caused downstream flooding.

                • pyman an hour ago

                  Just to stay on topic: outside the US there's a general rule of thumb: if Meta is against it, the EU is probably doing something right.

        • krainboltgreene 16 hours ago

          So the solution is to allow the actual entrenched interests to determine the future of things when they also barely grasp how the current markets function and are currently proclaiming to be futurists?

          • tjwebbnorfolk 13 hours ago

            The best way for "entrenched interests" to stifle competition is to buy/encourage regulation that keeps everybody else out of their sandbox pre-emptively.

            For reference, see every highly-regulated industry everywhere.

            You think Sam Altman was testifying to the US Congress begging for AI regulation because he's just a super nice guy?

            • goatlover 11 hours ago

              Regulation exists because of monopolistic practices and abuses in the early 20th century.

              • dmix 11 hours ago

                That's a bit oversimplified. Humans have been creating authority systems to try to control others' lives and business since formal societies have been a thing, likely even before agriculture. History is also full of examples of arbitrary and counterproductive attempts at control, which is a product of basic human nature combined with power, and why we must always be skeptical.

                • verisimi 9 hours ago

                  As a member of 'humanity', do you find yourself creating authority systems for AI though? No.

                  If you are paying for lobbyists to write the legislation you want, as corporations do, you get the law you want - that excludes competition, funds your errors etc.

                  The point is you are not dealing with 'humanity', you are dealing with those who represent authority for humanity - not the same thing at all. Connected politicians/CEOs etc are not actually representing 'humanity' - they merely say that they are doing so, while representing themselves.

              • keysdev 10 hours ago

                That may be, however regulation has just changed monopolistic practices into even more profitable oligarchic practices. Just look at Standard Oil.

          • buggyinout 15 hours ago

            They’re demanding collective conversation. You don’t have to be involved if you prefer to be asocial except to post impotent rage online.

            Just as the pols aren't futurists and aren't perfect, neither is anyone else. Everyone should sit at the table and discuss this like adults.

            You want to go live in the hills alone, go for it, Dick Proenneke. Society is people working collectively.

          • betaby 15 hours ago

            Won't somebody please think of the children?

            • johnisgood 4 hours ago

              Yes, a common rhetoric, along with terrorism and national security.

        • messe 7 hours ago

          > Which is very much what Europes market looks like today. Stasis and shifting to a stagnating middle.

          Preferable to a burgeoning oligarchy.

          • adastra22 5 hours ago

            No, that... that's exactly what we have today. An oligarchy persists through captured state regulation. A more free market would have a constantly changing top.

            • messe 4 hours ago

              Historically, freer markets have led to monopolies. It's why we have antitrust regulations in the first place (now if only they were enforced...)

              • adastra22 3 hours ago

                Depends on the time horizon you look at. A completely unregulated market usually ends up dominated by monopolists… who last a generation or two and then are usurped and become declining oligarchs. True all the way back to the Medici.

                In a rigidly regulated market with preemptive action by regulators (like EU, Japan) you end up with a persistent oligarchy that is never replaced. An aristocracy of sorts.

                The middle road is the best. Set up a fair playing field and rules of the game, but allow innovation to happen unhindered, until the dust has settled. There should be regulation, but the rules must be bought with blood. The risk of premature regulation is worse.

                • messe an hour ago

                  > There should be regulation, but the rules must be bought with blood.

                  That's an awfully callous approach, and displays a disturbing lack of empathy toward other people.

      • olalonde 15 hours ago

        You're both right, and that's exactly how early regulation often ends up stifling innovation. Trying to shape a market too soon tends to lock in assumptions that later prove wrong.

        • TFYS 8 hours ago

          Sometimes you can't reverse the damage and societal change after the market has already been created and shaped. Look at fossil fuels, plastic, social media, etc. We're now dependent on things that cause us harm, the damage done is irreversible and regulation is no longer possible because these innovations are now embedded in the foundations of modern society.

          Innovation is good, but there's no need to go as fast as possible. We can be careful about things and study the effects more deeply before unleashing life changing technologies into the world. Now we're seeing the internet get destroyed by LLMs because a few people decided it was ok to do so. The benefits of this are not even clear yet, but we're still doing it just because we can. It's like driving a car at full speed into a corner just to see what's behind it.

          • sneak 6 hours ago

            I think it’s one of those “everyone knows” things that plastic and social media are bad, but I think the world without them is way, way worse. People focus on these popular narratives but if people thought social media was bad, they wouldn’t use it.

            Personally, I don’t think they’re bad. Plastic isn’t that harmful, and neither is social media.

            I think people romanticize the past and status quo. Change is scary, so when things change and the world is bad, it is easy to point at anything that changed and say “see, the change is what did it!”

            • TFYS 6 hours ago

              People don't use things that they know are bad, but someone who has grown up in an environment where everyone uses social media for example, can't know that it's bad because they can't experience the alternative anymore. We don't know the effects all the accumulating plastic has on our bodies. The positive effects of these things can be bigger than the negative ones, but we can't know that because we're not even trying to figure it out. Sometimes it might be impossible to find out all the effects before large scale adoption, but still we should at least try. Currently the only study we do before deciding is the one to figure out if it'll make a profit for the owner.

              • sneak 5 hours ago

                > We don't know the effects all the accumulating plastic has on our bodies.

                This is handwaving. We can be pretty well sure at this point what the effects aren’t, given their widespread prevalence for generations. We have a 2+ billion sample size.

                • TFYS 3 hours ago

                  No, we can't be sure. There's a lot of diseases that we don't know the cause of, for example. Cancers, dementia, Alzheimer's, etc. There is a possibility that the rates of those diseases are higher because of plastics. Plastic pollution also accumulates, there was a lot less plastic in the environment a few decades ago. We add more faster than it gets removed, and there could be some threshold after which it becomes more of an issue. We might see the effect a few decades from now. Not only on humans, but it's everywhere in the environment now, affecting all life on earth.

            • staunton 4 hours ago

              > if people thought social media was bad, they wouldn’t use it.

              Do you think Heroin is good?

              • TFYS 3 hours ago

                I'm sure it's very good the first time you take it. If you don't consider all the effects before taking it, it does make sense. You feel very good, but the even stronger negative effects come after. Same can be said about a lot of technology.

              • sneak 3 hours ago

                Is the implication in your question that social media is addictive and should be banned or regulated on that basis?

                While some people get addicted to it, the vast majority of users are not addicts. They choose to use it.

                • staunton 3 hours ago

                  Addiction is a matter of degree. There's a bunch of polls where a large majority of people strongly agree that "they spend too much time on social media". Are they addicts? Are they "choosing to use it"? Are they saying it's too much because that's a trendy thing to say?

              • Lionga 3 hours ago

                People who take Heroin think it is good in the situation in which they are taking it.

          • FirmwareBurner 3 hours ago

            > Look at fossil fuels

            WHAT?! Do you think we as humanity would have gotten to all the modern inventions we have today like the internet, space travel, atomic energy, if we had skipped the fossil fuel era by preemptively regulating it?

            How do you imagine that? Unless you invent a time machine, go to the past, and give inventors schematics of modern tech achievable without fossil fuels.

            • 1718627440 an hour ago

              The internet was created by the military at the start of the fossil fuel era, so there is no reason why it should be affected by the oil era. If we didn't travel as much, because we didn't use cars and planes as much, the internet would be even more important.

              Space travel does need a lot of oil, so it might be affected, but its beginnings were in the 40s, so the research idea was already there.

              Atomic energy is also from the 40s and might have been the alternative to oil, so it would have thrived more if we hadn't used oil as much.

              Also, all three ARE heavily regulated and mostly done by nation states.

            • TFYS 3 hours ago

              Maybe not as fast as we did, but eventually we would have. Maybe more research would have been put into other forms of energy if the effects of fossil fuels had been considered more thoroughly and usage had been limited to a degree that didn't have a chance to cause such fast climate change. And so what if the rate of progress had been slower and we'd be 50 years behind current tech? At least we wouldn't have to worry about all the damage we've caused now, and the costs associated with that. Due to that damage our future progress might halt, while a slower, more careful society would continue advancing far into the future.

        • mycall 11 hours ago

          Depends on what those assumptions are. If it's about protecting humans from AI gross negligence, then the assumptions are predetermined to side with human norms (just one example). Let's hope logic and understanding of the long-term situation precede the arguments in the rulesets.

          • dmix 11 hours ago

            You're just guessing as much as anyone. Almost every generation in history has had doomers predicting the fall of their corner of civilization from some new thing: religious schisms, printing presses, radio, TV, advertisements, the internet, etc. You can look at some of the earliest writings by English priests in the 1500s predicting social decay and the destruction of society, which would sound exactly like social media posts in 2025 about AI. We should at a minimum understand the problem space before restricting it, especially given the nature of policy being extremely slow to change (see: copyright).

            • esperent 10 hours ago

              I'd urge you to read a book like Black Swan, or study up on statistics.

              Doomers have been wrong about completely different doom scenarios in the past (+), but that says nothing about this new scenario. If you're doing statistics in your head about it, you're wrong. We can't use scenarios from the past to make predictions about completely novel scenarios like thinking computers.

              (+) although they were very close to being right about nuclear doom, and may well be right about climate change doom.

      • felipeerias 15 hours ago

        The experience with other industries like cars (especially EVs) shows that the ability of EU regulators to shape global and home markets is a lot more limited than they like to think.

        • imachine1980_ 12 hours ago

          Not really. China made a big policy bet a decade early and won the battle: the whole government was put behind buying this new tech before everyone else, forcing buses to be electric if you wanted the federal-level thumbs up, or the lottery system, for example.

          So I disagree; Europe will probably be even further behind in EVs if it doesn't push EU manufacturers to invest so heavily in the industry.

          You can see, for example, that among legacy manufacturers the only ones in the top ten are European (3 out of 10 companies), not Japanese or Korean, and in Europe Volkswagen already overtook Tesla in Q1 sales, with Audi not that far behind either.

      • jabjq 15 hours ago

        What will happen, like every time a market is regulated in the EU, is that the market will move on without the EU.

      • energy123 13 hours ago

        The point is to stop and deter market failure, not anticipate hypothetical market failure

      • adastra22 5 hours ago

        That has never worked.

      • CamperBob2 14 hours ago

        If the regulators were qualified to work in the industry, then guess what: they'd be working in the industry.

    • amelius 15 hours ago

      We know what the market will look like. Quasi monopoly and basic user rights violated.

    • troupo 3 hours ago

      > before we have any idea what the market is going to look like in a couple years.

      Oh, we already know large chunks of it, and the regulations explicitly address that.

      If the chest-beating crowd were presented with these regulations piecemeal, without ever mentioning the EU, they'd probably be in overwhelming support of each part.

      But since they don't care to read anything and have an instinctive aversion to all things regulatory and most things EU, we get the boos and the jeers

    • ulfw 15 hours ago

      Waiting to regulate until the cat is out of the bag leads to monopolistic conglomerates like Meta and Google. Meta shouldn't have been allowed to usurp Instagram and WhatsApp, and Google shouldn't have been allowed to bring YouTube into the fold. Now it's too late to regulate a way out of this.

      • pbh101 11 hours ago

        It’s easy to say this in hindsight, though this is the first time I think I’ve seen someone say that about YouTube even though I’ve seen it about Instagram and WhatsApp a lot.

        The YouTube deal was a lot earlier than Instagram, in 2006. Google was way smaller than it is now. The iPhone hadn't been announced. And it wasn't two social networks merging.

        Very hard to see how regulators could have the clairvoyance to see into this specific future and its counter-factual.

      • user5534762135 5 hours ago

        >Now it's too late to regulate a way out of this.

        Technically untrue, monopoly busting is a kind of regulation. I wouldn't bet on it happening on any meaningful scale, given how strongly IT benefits from economies of scale, but we could be surprised.

    • rapatel0 12 hours ago

      I literally lived this with GDPR. In the beginning everyone ran around pretending to understand what it meant. There were a ton of consultants and lawyers who basically made up stuff that barely made sense. They grifted money out of startups by taking the most aggressive interpretation and selling policy templates.

      In the end the regulation was diluted to something that made sense(ish) but that process took about 4 years. It also slowed down all enterprise deals because no one knew if a deal was going to be against GDPR and the lawyers defaulted to “no” in those orgs.

      Asking regulators to understand and shape market evolution in AI is basically asking them to trade stocks by reading company reports written in Mandarin.

      • CalRobert 8 hours ago

        The main thing is the EU basically didn’t enforce it. I was really excited for data portability but it hasn’t really come to pass

      • troupo 9 hours ago

        > In the end the regulation was diluted to something that made sense(ish) but that process took about 4 years.

        It's the same regulation that was introduced in 2016. The only people who pretend not to understand it are those who think that selling user data to 2000+ "partners" is privacy.

    • verisimi 9 hours ago

      Exactly. No anonymity, no thought crime, lots of filters to screen out bad misinformation, etc. Regulate it.

    • ekianjo 15 hours ago

      They don't want a market. They want total control, as usual for control freaks.

  • zizee 15 hours ago

    It doesn't seem unreasonable. If you train a model that can reliably reproduce thousands/millions of copyrighted works, you shouldn't be distributing it. If it were just regular software that had that capability, would it be allowed? Just because it's a fancy AI model, is it ok?

    • Aurornis 14 hours ago

      > that can reliably reproduce thousands/millions of copyrighted works, you shouldn't be distributing it. If it were just regular software that had that capability, would it be allowed?

      LLMs are hardly reliable ways to reproduce copyrighted works. The closest examples usually involve prompting the LLM with a significant portion of the copyrighted work and then seeing if it can predict a number of tokens that follow. It's a big stretch to say that they're reliably reproducing copyrighted works any more than, say, a Google search producing a short excerpt of a document in the search results or a blog writer quoting a section of a book.

      It’s also interesting to see the sudden anti-LLM takes that twist themselves into arguing against tools or platforms that might reproduce some copyrighted content. By this argument, should BitTorrent also be banned? If someone posts a section of copyrighted content to Hacker News as a comment, should YCombinator be held responsible?

      • Jensson 14 hours ago

        > LLMs are hardly reliable ways to reproduce copyrighted works

        Only because the companies are intentionally making it so. If they weren't trained to not reproduce copyrighted works they would be able to.

        • ben_w 7 hours ago

          They're probably training them to refuse, but fundamentally the models are obviously too small to memorise most content, and can only do it when there are many copies in the training set. Quotation is a waste of parameters better used for generalisation.

          The other thing is that approximately all of the training set is copyrighted, because that's the default even for e.g. comments on forums like this comment you're reading now.

          The other other thing is that at least two of the big model makers went and pirated book archives on top of crawling the web.

        • jazzyjackson 12 hours ago

          it's like these people never tried asking for song lyrics

        • terminalshort 13 hours ago

          LLMs even fail on tasks like "repeat back to me exactly the following text: ..." To say they can exactly and reliably reproduce copyrighted work is quite a claim.

          • tomschwiha 6 hours ago

            You can also ask people to repeat a text and some will fail. What I want to say is that even if some LLMs (probably only older ones) fail, that doesn't mean future ones will fail (in the majority). Especially if benchmarks indicate they are becoming smarter over time.

    • CamperBob2 14 hours ago

      I have a Xerox machine that can reliably reproduce copyrighted works. Is that a problem, too?

      Blaming tools for the actions of their users is stupid.

      • threetonesun 14 hours ago

        If the Xerox machine had all of the copyrighted works in it and you just had to ask it nicely to print them I think you'd say the tool is in the wrong there, not the user.

        • zettabomb 8 hours ago

          Xerox already went through that lawsuit and won, which is why photocopiers still exist. The tool isn't in the wrong for being told to print out the copyrighted works. The user still had to make the conscious decision to copy that particular work. Hence, still the user's fault.

          • 1718627440 an hour ago

            You take the copyrighted work to the printer; you don't upload data to an LLM first, it is already in the machine. If you had LLMs without training data (however that would work) and the user needed to provide the data, then it would be ok.

        • Aurornis 14 hours ago

          LLMs do not have all copyrighted works in them.

          In some cases they can be prompted to guess a number of tokens that follow an excerpt from another work.

          They do not contain all copyrighted works, though. That’s an incorrect understanding.

        • monetus 14 hours ago

          Are there any LLMs available with a "give me copyrighted material" button? I don't think that is how they work.

          There are also already laws concerning commercial use of someone's image, as far as I know, aren't there?

      • saghm 4 hours ago

        If I've copied someone else's copyrighted work on my Xerox machine, then give it to you, you can't reproduce the work I copied. If I leave a copy of it in the scanner when I give it to you, that's another story. The issue here isn't the ability of an LLM to produce it when I provide it with the copyrighted work as an input, it's whether or not there's an input baked-in at the time of distribution that gives it the ability to continue producing it even if the person who receives it doesn't have access to the work to provide it in the first place.

        To be clear, I don't have any particular insight on whether this is possible right now with LLMs, and I'm not taking a stance on copyright law in general with this comment. I don't think your argument makes sense though, because there's a clear technical difference that seems like it would be pretty significant as a matter of law. There are plenty of reasonable arguments against things like the agreement mentioned in the article, but in my opinion, your objection isn't one of them.

        • visarga 3 hours ago

          You can train an LLM on completely clean data, creative commons and legally licensed text, and at inference time someone will just put a whole article or chapter into the model and have full access to regenerate it however they like.

      • zeta0134 14 hours ago

        Helpfully, the law already disagrees. That Xerox machine tampers with the printed result, leaving a faint signature that is meant to help detect forgeries. You know, for when users copy things that are actually illegal to copy. The Xerox machine (and every other printer sold today) literally leaves a paper trail to trace copies back to them.

        https://en.wikipedia.org/wiki/Printer_tracking_dots

        • ChadNauseam 14 hours ago

          I believe only color printers are known to have this functionality, and it's typically used for detecting counterfeiting, not for enforcing copyright.

          • zeta0134 14 hours ago

            You're quite right. Still, it's a decent example of blaming the tool for the actions of its users. The law clearly exerted enough pressure to convince the tool maker to modify that tool against the user's wishes.

            • justinclift 14 hours ago

              > Still, it's a decent example of blaming the tool for the actions of its users.

              They're not really "blaming" the tool though. They're using a supply chain attack against the subset of users they're interested in.

      • fodkodrasz 11 hours ago

        According to the law in some jurisdictions it is. (notably most EU Member States, and several others worldwide).

        In those places fees ("reprographic levy") are actually included in the price of the appliance and the needed supplies, or public operators may need to pay additionally based on usage. That money goes towards funds created to compensate copyright holders for loss of profit due to copyright infringement carried out through the use of photocopiers.

        Xerox is in no way singled out and discriminated against. (Yes, I know this is an Americanism)

  • badsectoracula 7 hours ago

    > One of the key aspects of the act is how a model provider is responsible if the downstream partners misuse it in any way

    AFAICT the actual text of the act[0] does not mention anything like that. The closest to what you describe is part of the chapter on copyright of the Code of Practice[1]; however, the code does not add any new requirements to the act (it is not even part of the act itself). What it does is present a way (which does not mean it is the only one) to comply with the act's requirements (as a relevant example, the act requires respecting machine-readable opt-out mechanisms when training but doesn't specify which ones, while the code of practice explicitly mentions respecting robots.txt during web scraping).
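
    To illustrate that robots.txt point, a minimal sketch of honouring such a machine-readable opt-out before scraping a page for training data; it assumes Python's standard urllib.robotparser and a hypothetical crawler name ("ExampleAIBot"), and is one possible approach rather than anything the code of practice prescribes:

      import urllib.robotparser
      from urllib.parse import urlparse

      USER_AGENT = "ExampleAIBot"  # hypothetical crawler name

      def allowed_to_fetch(url: str) -> bool:
          # Fetch the site's robots.txt and check the rules that apply to our agent.
          robots_url = urlparse(url)._replace(path="/robots.txt", query="", fragment="").geturl()
          rp = urllib.robotparser.RobotFileParser()
          rp.set_url(robots_url)
          rp.read()
          return rp.can_fetch(USER_AGENT, url)

      # Only add a page to the training corpus if the site has not opted out.
      if allowed_to_fetch("https://example.com/some/article"):
          pass  # proceed with downloading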

    The part about copyright outputs in the code is actually (measure 1.4):

    > (1) In order to mitigate the risk that a downstream AI system, into which a general-purpose AI model is integrated, generates output that may infringe rights in works or other subject matter protected by Union law on copyright or related rights, Signatories commit:

    > a) to implement appropriate and proportionate technical safeguards to prevent their models from generating outputs that reproduce training content protected by Union law on copyright and related rights in an infringing manner, and

    > b) to prohibit copyright-infringing uses of a model in their acceptable use policy, terms and conditions, or other equivalent documents, or in case of general-purpose AI models released under free and open source licenses to alert users to the prohibition of copyright infringing uses of the model in the documentation accompanying the model without prejudice to the free and open source nature of the license.

    > (2) This Measure applies irrespective of whether a Signatory vertically integrates the model into its own AI system(s) or whether the model is provided to another entity based on contractual relations.

    Keep in mind that "Signatories" here means whoever signed the Code of Practice: obviously if I make my own AI model and do not sign that code of practice myself (but I still follow the act's requirements), someone picking up my AI model and signing the Code of Practice themselves doesn't obligate me to follow it too. That'd be like someone releasing a plugin for Photoshop under the GPL and then demanding Adobe release Photoshop's source code.

    As for open source models, the "(1b)" above is quite clear (for open source models that want to use this code of practice - which they do not have to!) that all they have to do is to mention in their documentation that their users should not generate copyright infringing content with them.

    In fact the act has a lot of exceptions for open-source models. AFAIK Meta's beef with the act is that the EU AI office (or whatever it is called, I do not remember) does not recognize Meta's AI as open source, so they do not get to benefit from those exceptions, though I'm not sure about the details here.

    [0] https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:...

    [1] https://ec.europa.eu/newsroom/dae/redirection/document/11811...

vanderZwan 20 hours ago

I admit that I am biased enough to immediately expect the AI agreement to be exactly what we need right now if this is how Meta reacts to it. Which I know is stupid because I genuinely have no idea what is in it.

  • mhitza 20 hours ago

    There seem to be 3 chapters of this "AI Code of Practice" https://digital-strategy.ec.europa.eu/en/policies/contents-c... and its drafting history https://digital-strategy.ec.europa.eu/en/policies/ai-code-pr...

    I did not read it yet, only familiar with the previous AI Act https://artificialintelligenceact.eu/ .

    If I were to guess, Meta is going to have a problem with chapter 2 of the "AI Code of Practice" because it deals with copyright law, and probably conflicts with their (and others') approach of ripping text out of copyrighted material (is it clear yet whether it can be called fair use?)

    • jahewson 20 hours ago

      > is it clear yet if it can be called fair use?

      Yes.

      https://www.publishersweekly.com/pw/by-topic/digital/copyrig...

      Though the EU has its own courts and laws.

      • GuB-42 13 hours ago

        In France, fair use doesn't even exist!

        We have exceptions, which are similar, but the important difference is that with fair use courts decide what is fair and what is not, whereas exceptions are written into law. It is a more rigid system that tends to favor copyright owners, because if what is seen as "fair" doesn't fit one of the listed exceptions, copyright still applies. Note that AI training probably fits one of the exceptions in French law (but again, it is complicated).

        I don't know the law in other European countries, but AFAIK, EU and international directives don't do much to address the exceptions to copyright, so it is up to each individual country.

        • mikae1 7 hours ago

          > In France, fair use doesn't even exist!

          Same in Sweden. The U.S. has one of the broadest and most flexible fair use laws.

          In Sweden we have "citaträtten" (the right to quote). It only applies to text and it is usually said that you can't quote more than 20% of the original text.

      • dmbche 20 hours ago

        That was a district judge's pretrial ruling on June 25th; I'd be surprised if this doesn't get challenged soon in higher courts.

        And acquiring the copyrighted materials is still illegal - this is not a blanket protection for all AI training on copyrighted materials

        • thewebguyd 16 hours ago

          Even if it gets challenged successfully (and tbh I hope it does), the damage is already done. Blocking it at this stage just pulls up the ladder behind the behemoths.

          Unless the courts are willing to put injunctions on any model that made use of illegally obtained copyrighted material - which would pretty much be all of them.

          • 1718627440 15 minutes ago

            But a ruling can determine that the results of the violation need to be destroyed.

        • zettabomb 8 hours ago

          Anthropic bought millions of books and scanned them, meaning that (at least for those sources) they were legally obtained. There has also been rampant piracy used to obtain similar material, which I won't defend. But it's not an absolute - training can be done on legally acquired material.

  • tjwebbnorfolk 13 hours ago

    Being evil doesn't make them necessarily wrong.

    • vanderZwan 11 hours ago

      Agreed, that's why I'm calling out the stupidity of my own bias.

  • voidfunc 15 hours ago

    [flagged]

    • ks2048 15 hours ago

      It seems EU governments should be preventing US companies from dominating their countries.

    • slater 15 hours ago

      [flagged]

    • j_maffe 15 hours ago

      You really went all out with showing your contempt, huh? I'm glad that you're enjoying the tech companies utterly dominating US citizens in the process

cakealert 10 hours ago

EU regulations are sometimes able to bully the world into compliance (e.g. cookies).

Usually minorities are able to impose "wins" on a majority when the price of compliance is lower than the price of defiance.

This is not the case with AI. The stakes are enormous. AI is full steam ahead and no one is getting in the way short of nuclear war.

  • oaiey 9 hours ago

    But AI also carries tremendous risks, from something as simple as automating warfare to something like an evil AGI.

    In Germany we still have traumas from the automatic machine guns set up on the wall between East and West Germany. Ukraine is fighting a drone war in the trenches with a psychological effect on soldiers comparable to WWI.

    The stakes are enormous, and not only toward the good. There is enough science fiction written about it. Regulation and laws are necessary!

    • tim333 3 hours ago

      I think your machine gun example illustrates that people are quite capable of massacring each other without AI or even high tech - in past periods sometimes over 30% of males died in warfare. While AI could get involved, it's kind of a separate thing.

      • FirmwareBurner 3 hours ago

        Yeah, his automated gun phobia argument is dumb. Should we ban all future tech development because some people are scared of some things that can be dangerous but useful? NO.

        Plus, ironically, Germany's Rheinmetall is a leader in automated anti-air guns, so the people's phobia of automated guns is pointless and, at least in this case, common sense won, but in many others, like nuclear energy, it lost.

        It seems like Germans are easy to manipulate into going against their best interests, if you manage to trigger some phobias in them via propaganda. "Ohoohoh look out, it's the nuclear boogieman, now switch your economy to Russian gas instead, it's safer"

        • 1718627440 18 minutes ago

          The switch to Russian gas looks bad now, but it was rational back then. The idea was to give Russia leverage on Europe other than war, so that they wouldn't need war.

    • zettabomb 8 hours ago

      I don't disagree that we need regulation, but I also think citing literal fiction isn't a good argument. We're also very, very far away from anything approaching AGI, so the idea of it becoming evil seems a bit far fetched.

      • ben_w 7 hours ago

        I agree fiction is a bad argument.

        On the other hand, firstly, every single person disagrees about what the phrase AGI means, varying from "we've had it for years already" to "the ability to do provably impossible things like solve the halting problem"; and secondly, we have a very bad track record for knowing how long it will take to invent anything in the field of AI, with both positive and negative failures: for example, constantly thinking that self-driving cars are just around the corner vs. people saying an AI that could play Go well was "decades" away a mere few months before it beat the world champion.

      • tim333 3 hours ago

        Did you catch the news about Grok wanting to kill the Jews last week? All you need for AI or AGI to be evil is a prompt saying "be evil".

      • HighGoldstein 7 hours ago

        Autonomous sentry turrets have already been a thing since the 2000s. If we assume that military technology is always at least some 5-10 years ahead of civilian, it is likely that some if not all of the "defense" contractors have far more terrifying autonomous weapons.

      • ken47 3 hours ago

        We don't need AGI in order for AI to destroy humanity.

    • chii 5 hours ago

      Regulation does not stop weapons that utilize AI from being created. It only slows down honest states that try to abide by it, and gives the dishonest ones a head start.

      Guess what happens to the race then?

    • stainablesteel 2 hours ago

      you can choose to live in fear, the rest of us are embracing growth

  • encom 6 hours ago

    The only thing the cookie law has accomplished for users is pestering everyone with endless popups (full of dark patterns). The WWW is pretty much unbearable to use without uBlock filtering that nonsense away. User tracking and fingerprinting has moved server side. Zero user privacy has been gained, because there's too much money to be made and the industry routed around this brain-dead legislation.

    • red_trumpet 5 hours ago

      > User tracking and fingerprinting has moved server side.

      This smells like a misconception of the GDPR. The GDPR is not about cookies, it is about tracking. You are not allowed to track your users without consent, even if you do not use any cookies.

      • whatevaa 2 hours ago

        Login is a form of tracking, even when it is purely functional and not used for tracking purposes.

        Laws are analyzed by lawyers, and they will err on the side of caution, so you end up with these notices.

jahewson 20 hours ago

There’s a summary of the guidelines here for anyone who is wondering:

https://artificialintelligenceact.eu/introduction-to-code-of...

It’s certainly onerous. I don’t see how it helps anyone except for big copyright holders, lawyers and bureaucrats.

  • felipeerias 14 hours ago

    These regulations may end up creating a trap for European companies.

    Essentially, the goal is to establish a series of thresholds that result in significantly more complex and onerous compliance requirements, for example when a model is trained past a certain scale.

    Burgeoning EU companies would be reluctant to cross any one of those thresholds and have to deal with sharply increased regulatory risks.

    On the other hand, large corporations in the US or China are currently benefiting from a Darwinian ecosystem at home that allows them to evolve their frontier models at breakneck speed.

    Those non-EU companies will then be able to enter the EU market with far more polished AI-based products and far deeper pockets to face any regulations.

    • randomNumber7 7 hours ago

      Also, EU users will try to use the better AI products with e.g. a VPN to the US.

      • aniviacat 4 hours ago

        Most won't. Remember that this is an issue almost no one (outside a certain bubble) is aware of.

        • tim333 3 hours ago

          Well, if there's not much difference why bother. If there are copyright restrictions on things people care about Europeans are perfectly capable of bypassing restrictions, like watching the ending of Game of Thrones etc.

        • FirmwareBurner 3 hours ago

          Haha, huge, HUGE L-take. Go to any library or coffeeshop, and you'll see most students on their laptops are on ChatGPT. Do you think they won't immediately figure out how to use a VPN to move to the "better" models from the US or China if the EU regulations cripple the ones available in the EU?

          The EU's preemptive war on AI will be like the RIAA's war on music piracy. EU consumers will get their digital stuff one way or another; only the EU's domestic products will fall behind by not competing to create an equally good product that consumers want.

          • aniviacat an hour ago

            > Do you think they won't immediately figure out how to use a VPN to move to the "better" models

            I think they don't even know the term "model" (in AI context), let alone which one's the best. They only know ChatGPT.

            I do think it's possible that stories spread like "the new cool ChatGPT update is US-only: Here's how to access it in the EU".

            However I don't think many will make use of that.

            Anecdotally, most people around me (even CS colleagues) only use the standard model, ChatGPT 4o, and don't even take a look at the other options.

            Additionally, AI companies could quickly get in trouble if they accept payments from EU credit cards.

    • thrance 3 hours ago

      It's always the same argument, and it is true. The US retained an edge over the rest of the world through deregulating tech.

      My issue with this is that it doesn't look like America's laissez-faire stance on these issues has helped Americans much. Internet companies have gotten absolutely humongous and given rise to a new class of techno-oligarchs that are now funding anti-democracy campaigns.

      I feel like getting slightly less performant models is a fair price to pay for increased scrutiny over these powerful private actors.

    • Workaccount2 12 hours ago

      And then they'll get fined a few billion anyway, to make up for the lack of European tech to tax.

      • izacus 3 hours ago

        As a European, this sounds like an excellent solution.

        US megatech funding our public infrastructure? Amazing. Especially after the US attacked us with tariffs.

        • Workaccount2 3 hours ago

          Just like Russian mega-energy powering your grid?

          Bad idea.

          Europe is digging itself into a hole with a combination of suffocating regulation and dependence on foreign players. It's so dumb, but Europeans are so used to it they can't see the problem.

  • l5870uoo9y 7 hours ago

    It's basically micromanaging an industry that European countries have not been able to cultivate themselves. It's legislation for legislation's sake. If you had a naive hope that Mario Draghi's gloomy report on the EU's competitiveness would pave the way for a political breakthrough in the EU - one is tempted to say something along the lines of communist China's market reforms in the 70s - then you have to conclude that the EU is continuing in exactly the same direction. I have actually lost faith in the EU.

  • troupo 9 hours ago

    > It’s certainly onerous.

    What exactly is onerous about it?

  • cm2012 20 hours ago

    [flagged]

    • Atotalnoob 20 hours ago

      This all seems fine.

      Most of these items should be implemented by major providers…

      • techjamie 19 hours ago

        The problem is this severely harms the ability to release open-weights models, and only leaves the average person with options that aren't good for privacy.

    • isoprophlex 20 hours ago

      I don't care about your overly verbose, blandly written slop. If I wanted a llm summary, I would ask an llm myself.

      This really is the 2025 equivalent to posting links to a google result page, imo.

      • marcellus23 20 hours ago

        More verbose than the source text? And who cares about bland writing when you're summarizing a legal text?

      • rokkamokka 20 hours ago

        It is... helpful though. More so than your reply

        • isoprophlex 20 hours ago

          Touché, I'll grant you that.

      • JonChesterfield 19 hours ago

        Nope. This text is embedded in HN and will survive rather better than the prompt or the search result, both of which are non-reproducible. It may bear no relation to reality but at least it won't abruptly disappear.

        • jjulius 13 hours ago

          Unless, ya know, it gets marked as Flagged/Dead.

rockemsockem 20 hours ago

I'm surprised that most of the comments here are siding with Europe blindly?

Am I the only one who assumes by default that European regulation will be heavy-handed and ill conceived?

  • satellite2 19 hours ago

    Well, Europe hasn't enacted policies actually breaking American monopolies until now.

    Europeans are still essentially on Google, Meta and Amazon for most of their browsing experiences. So I'm assuming Europe's goal is not to compete or break American moat but to force them to be polite and to preserve national sovereignty on important national security aspects.

    A position which is essentially reasonable if not too polite.

    • almatabata 19 hours ago

      > So I'm assuming Europe's goal is not to compete or break American moat but to force them to be polite and to preserve national sovereignty on important national security aspects.

      When push comes to shove, US companies will always prioritize US interests. If you want to stay under the US umbrella, by all means. But honestly it looks very short-sighted to me.

      After seeing this news https://observer.co.uk/news/columnists/article/the-networker..., how can you have any faith that they will play nice?

      You have only one option. Grow alternatives. Fund your own companies. China managed to fund the local market without picking winners. If European countries really care, they need to do the same for tech.

      If they don't they will forever stay under the influence of another big brother. It is US today, but it could be China tomorrow.

      • _zoltan_ 15 hours ago

        The EU sucks at venture capital.

  • remram 16 hours ago

    "blindly"? Only if you assume you are right in your opinion can you arrive at the conclusion that your detractors didn't learn about it.

    Since you then admit to "assuming by default", are you sure you are not what you complain about?

  • notyourwork 16 hours ago

    What is bad about heavy handed regulation to protect citizens?

    • felipeerias 15 hours ago

      That it is very likely not going to work as advertised, and might even backfire.

      The EU AI regulation establishes complex rules and requirements for models trained above 10^25 FLOPS. Mistral is currently the only European company operating at that scale, and they are also asking for a pause before these rules go into effect.
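
      For a rough sense of scale, here is a back-of-envelope sketch of that 10^25 FLOP threshold, using the common training-compute approximation C ≈ 6 · parameters · tokens (the approximation and the example model sizes are illustrative assumptions, not figures from the Act):

          // Rough back-of-envelope for the 10^25 FLOP training threshold.
          // Uses the common approximation C ≈ 6 * N (parameters) * D (tokens);
          // the model sizes below are illustrative assumptions only.
          const THRESHOLD_FLOPS = 1e25;

          function trainingFlops(params: number, tokens: number): number {
            return 6 * params * tokens;
          }

          console.log(trainingFlops(70e9, 2e12) >= THRESHOLD_FLOPS);   // ~8.4e23 -> false, below
          console.log(trainingFlops(400e9, 15e12) >= THRESHOLD_FLOPS); // ~3.6e25 -> true, above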

    • rdm_blackhole 5 hours ago

      The EU is pushing for a backdoor in all major messaging/email providers to "protect the children". But it's for our own good you see? The EU knows best and it wants your data without limits and without probable cause. Everyone is a suspect.

      1984 wasn't supposed to be a blueprint.

    • terminalshort 13 hours ago

      This is the same entity that has literally ruled that you can be charged with blasphemy for insulting religious figures, so intent to protect citizens is not a motive I ascribe to them.

    • mensetmanusman 14 hours ago

      Will they resort to turning off the Internet to protect citizens?

      • gnulinux996 10 hours ago

        Is this AI agreement about "turning off the Internet"?

      • justinclift 14 hours ago

        Or maybe just exclude Meta from the EU? :)

    • stainablesteel 2 hours ago

      what's bad about it is when people say "it's to protect citizens" when it's really a political move to control american companies

    • _zoltan_ 15 hours ago

      It does not protect citizens. The EU shoves a lot down the member states' throats.

    • Workaccount2 11 hours ago

      You end up with an anemic industry and heavy dependence on foreign players.

    • wtcactus 4 hours ago

      Because it doesn't protect us.

      It just creates barriers for internal players, while giving a massive head start for evil outside players.

    • marginalia_nu 16 hours ago

      A good example of how this can end up with negative outcomes is the cookie directive, which is how we ended up with cookie consent popovers on every website that do absolutely nothing to prevent tracking and have only amounted to making lives more frustrating in the EU and abroad.

      It was a decade too late and written by people who were incredibly out of touch with the actual problem. The GDPR is a bit better, but it's still a far bigger nuisance for regular European citizens than for the companies, which still track and profile people largely unhindered.

      • plopilop 15 hours ago

        Cookie consent popovers were a deliberate decision by companies to comply in the worst possible way. A much simpler approach would have been to stop tracking users, especially when it is not their primary business.

        Newer regulations also mandate that "reject all cookies" be a one-click action, but surprisingly compliance is low. Once again, the enemy of the customer here is the company, not the EU regulation.

        • ChadNauseam 14 hours ago

          I don't believe that every website has colluded to give themselves a horrible user experience in some kind of mass protest against the GDPR. My guess is that companies are acting in their own interests, which is exactly what I expect them to do, and if the EU is not capable of figuring out what that will look like, then that is a valid criticism of their ability to make regulations.

          • 1718627440 17 minutes ago

            Yet that user interface is against the law and enforcing the GDPR would improve it.

        • eastbound 13 hours ago

          Perfect example of regulation shaping a market. And succeeding only at producing ill results.

      • zizee 15 hours ago

        So because sometimes a regulation misses the mark, governments should not try to regulate?

        • marginalia_nu 15 hours ago

          Well, pragmatically, I'd say no. We must judge regulations not by the well-wishes and intentions behind them but by the actual outcomes they have. These regulations affect people, jobs and lives.

          The odds of the EU actually hitting a useful mark with these types of regulations, given their technical illiteracy, are just astronomically low.

        • JumpCrisscross 15 hours ago

          I think OP is criticising blindly trusting the regulation hits the mark because Meta is mad about it. Zuckerberg can be a bastard and correctly call out a burdensome law.

      • thrance 3 hours ago

        Bad argument, the solution is not to not regulate, it's to make a new law mandating companies to make cookies opt-in behind a menu that can't be a banner. And if this somehow backfires too, we go again. Giving up is not the solution to the privacy crisis.

    • CamperBob2 14 hours ago

      "Even the very wise cannot see all ends." And these people aren't what I'd call "very wise."

      Meanwhile, nobody in China gives a flying fuck about regulators in the EU. You probably don't care about what the Chinese are doing now, but believe me, you will if the EU hands the next trillion-Euro market over to them without a fight.

  • seydor 4 hours ago

    It's just foreign interests trying to keep Europe down

  • troupo 9 hours ago

    > Am I the only one who assumes by default

    And that's the problem: assuming by default.

    How about not assuming by default? How about reading something about this? How about forming your own opinion, and not the opinion of the trillion- dollar supranational corporations?

  • campl3r 9 hours ago

    Or you know, some actually read it and agree?

  • 9dev 20 hours ago

    Maybe the others have put in a little more effort to understand the regulation before blindly criticising it? Similar to the GDPR, a lot of it is just common sense—if you don’t think that "the market" as represented by global mega-corps will just sort it out, that is.

    • Alupis 20 hours ago

      Our friends in the EU have a long history of well-intentioned but misguided policy and regulations, which has led to stunted growth in their tech sector.

      Maybe some think that is a good thing - and perhaps it may be - but I feel it's more likely any regulation regarding AI at this point in time is premature, doomed for failure and unintended consequences.

      • 9dev 20 hours ago

        Yet at the same time, they also have a long history of very successful policy, such as the USB-C issue, but also the GDPR, which has raised the issue of our right to privacy all over the world.

        How long can we let AI go without regulation? Just yesterday, there was a report here on Delta using AI to squeeze higher ticket prices from customers. Next up is insurance companies. How long do you want to watch? Until all accountability is gone for good?

        • pembrook 16 hours ago

          Hard disagree on both GDPR and USBC.

          If I had to pick a connector that the world was forced to use forever due to some European technocrat, I would not have picked usb-c.

          Hell, the ports on my MacBook are nearly shot just a few years in.

          Plus GDPR has created more value for lawyers and consultants than it has for EU citizens.

          • kaashif 15 hours ago

            The USB-C charging ports on my phones have always collected lint to the point they totally stop working and have to be cleaned out vigorously.

            I don't know how this problem is so much worse with USB-C or the physics behind it, but it's a very common issue.

            This port could be improved for sure.

            • user5534762135 5 hours ago

              As someone with both a usb-c and micro-usb phone, I can assure you that other connectors are not free of that problem. The micro-usb one definitely feels worse. Not sure about the old proprietary crap that used to be forced down our throats so we buy Apple AND Nokia chargers, and a new one for each model, too.

          • Renaud 14 hours ago

            > Plus GDPR has created more value for lawyers and consultants than it has for EU citizens.

            Monetary value, certainly, but that’s considering money as the only desirable value to measure against.

            • pembrook 13 hours ago

              Who said money. Time and human effort are the most valuable commodities.

              That time and effort wasted on consultants and lawyers could have been spent on more important problems or used to more efficiently solve the current one.

        • rockemsockem 19 hours ago

          I mean, getting USB-C to be usable on everything is like a nice-to-have, I wouldn't call it "very successful policy".

          • 9dev 18 hours ago

            It’s just an example. The EU has often, and often successfully, pushed for standardisation to the benefit of end users.

            • Alupis 18 hours ago

              Which... has the consequence of stifling innovation. Regulation/policy is a two-way street.

              Who's to say USB-C is the end-all-be-all connector? We're happy with it today, but Apple's Lightning connector had merit. What if two new, competing connectors come out in a few years' time?

              The EU regulation, as-is, simply will not allow a new, technically superior connector to enter the market. Fast forward a decade to when USB-C is dead: the EU will keep it limping along - stifling more innovation along the way.

              Standardization like this is difficult to achieve via consensus - but via policy/regulation? These are the same governing bodies that hardly understand technology/internet. Normally standardization is achieved via two (or more) competing standards where one eventually "wins" via adoption.

              Well intentioned, but with negative side-effects.

              • sensanaty 7 hours ago

                If the industry comes out with a new, better connector, they can use it, as long as they also provide USB-C ports. If enough of them collectively decide the new one is superior, then they can start using that port in favor of USB-C altogether.

                The EU says nothing about USB-C being the bestest and greatest, they only say that companies have to come to a consensus and have to have 1 port that is shared between all devices for the sake of consumers.

                I personally much prefer USB-C over the horrid clusterfuck of proprietary cables that weren't compatible with one another, that's for sure.

              • troupo 3 hours ago

                > The EU regulation, as-is, simply will not allow a new technically superior connector to enter the market.

                As in: the EU regulation literally addresses this. You'd know it if you didn't blindly repeat uneducated talking points by others who are as clueless as you are.

                > Standardization like this is difficult to achieve via consensus - but via policy/regulation?

                In the ancient times of 15 or so years ago every manufacturer had their own connector incompatible with each other. There would often be connectors incompatible with each other within a single manufacturer's product range.

                The EU said: settle on a single connector voluntarily, or else. At the time the industry settled on micro-USB and started working on USB-C. Hell, even Power Delivery wasn't standardized until USB-C.

                Consensus doesn't always work. Often you do need government intervention.

    • rockemsockem 19 hours ago

      I'm specifically referring to several comments that say they have not read the regulation at all, but think it must be good if Meta opposes it.

    • ars 20 hours ago

      > GDPR

      You mean that thing (or is that another law?) that forces me to find that "I really don't care in the slightest" button about cookies on every single page?

      • anonymousab 16 hours ago

        That is malicious compliance with the law, and more or less indicative of a failure of enforcement against offenders.

      • junto 20 hours ago

        No, the law that ensures private individuals have the power to know what is stored about them, to change incorrect data, and to have it deleted unless it is legally necessary to hold it - all in a timely manner - and that financially penalizes companies that do not comply.

        • pelorat an hour ago

          > and have it deleted unless legally necessary to hold it

          Tell that to X which disables your ability to delete your account if it gets suspended.

      • sensanaty 7 hours ago

        No, GDPR is the law that allowed me to successfully request the deletion of everything companies like Meta have ever harvested on me without my consent and for them to permanently delete it.

        Fun fact, GitHub doesn't have cookie banners. It's almost like it's possible to run a huge site without being a parasite and harvesting every iota of data of your site's visitors!

      • cenamus 20 hours ago

        That's not the GDPR.

  • lovich 20 hours ago

    I’d side with Europe blindly over any corporation.

    The European government has at least a passing interest in the well-being of human beings, while that is not valued by the incentives that corporations live by.

    • rdm_blackhole 34 minutes ago

      The EU is pushing for a backdoor in all major messaging/email providers to "protect the children". No limits and no probable cause required. Everyone is a suspect.

      Are you still sure you want to side blindly with the EU?

    • rockemsockem 19 hours ago

      "All corporations that exist everywhere make worse decisions than Europe" is a weirdly broad statement to make.

  • andrepd 20 hours ago

    So you're surprised that people are siding with Europe blindly, but you're "assuming by default" that you should side with Meta blindly.

    Perhaps it's easier to actually look at the points in contention to form your opinion.

    • rockemsockem 19 hours ago

      I don't remember saying anything about blindly deciding things being a good thing.

  • zeptonix 20 hours ago

    Everything in this thread even remotely anti-EU-regulation is being extremely downvoted

    • impossiblefork 15 hours ago

      The regulations are pretty reasonable though.

    • rockemsockem 19 hours ago

      Yeah it's kinda weird.

      Feels like I need to go find a tech site full of people who actually like tech instead of hating it.

      • wswope 14 hours ago

        Your opinions aren't the problem, and tech isn't the problem. It's entirely your bad-faith strawman arguments and trolling.

        https://news.ycombinator.com/item?id=44609135

        That feeling is correct: this site is better without you. Please put your money where your mouth is and leave.

      • asats 16 hours ago

        Don't know if I'm biased but it seems there has been a slow but consistent and accelerating redditification of hacker news.

        • randomNumber7 7 hours ago

          It's the AI hype and the people who think they are hackers because they can ask an LLM to write code.

      • trinsic2 14 hours ago

        No, we like tech that works for the people/public, not against them. I know it's a crazy idea.

      • blibble 16 hours ago

        I like tech

        I don't like meta or anything it has done, or stands for

      • j_maffe 15 hours ago

        Tech and techies don't like to be monopolized

      • troupo 9 hours ago

        As others have pointed out, we like tech.

        We don't like what trillion-dollar supranational corporations and infinite VC money are doing with tech.

        Hating things like "We're saving your precise movements and location for 10+ years" and "we're using AI to predict how much you can be charged for stuff" is not hating technology

      • guelo 12 hours ago

        If you don't hate big tech you haven't been paying attention. Enshittification became a popular word for a reason.

      • OtomotO 16 hours ago

        I like tech, but I despise cults

    • vicnov 17 hours ago

      It is fascinating. I assume that the tech world is further to the left, and that interpretation of "left" is very pro-AI regulation.

    • gnulinux996 10 hours ago

      Are you suggesting something here?

  • OtomotO 16 hours ago

    Are you aware of the irony in your post?

  • xandrius 20 hours ago

    If I've got to side blindly with any entity it is definitely not going to be Meta. That's all there is.

    • jabjq 15 hours ago

      I feel the same, but about the EU. After all, I have a choice whether to use Meta or not. There is no escaping the EU short of leaving my current life.

      • maartenscholl 9 hours ago

        Meta famously tracks people extensively even if they don't have an account there, through a technique called shadow profiles.

    • rockemsockem 19 hours ago

      I mean, ideally no one would side blindly at all :D

      • js4ever 16 hours ago

        That's the issue with people from a certain side of politics: they don't vote for something, they always side with or vote against something or someone... blindly. It's like pure hate overriding reason. But it's ok, they are the 'good' ones, so they are always right and don't really need to think.

        • amelius 15 hours ago

          Sometimes people are just too lazy to read an article. If you just gave one argument in favor of Meta, then perhaps that could have started a useful conversation.

          • bdangubic 15 hours ago

            Perhaps… if a sane person could find anything in favor of one of the most Evil corporations in the history of mankind…

throwpoaster 14 hours ago

EU is going to add popups to all the LLMs like they did all the websites. :(

  • user5534762135 5 hours ago

    The internet is riddled with popups and attention grabbing dark patterns, but the only one that's a problem is the one that actually lets you opt out of being tracked to death?

    • ryukoposting 2 hours ago

      ...yes? There are countless ways it could have been implemented that would have been more effective, and less irritating for billions of people. Force companies to respect the DNT header. Ta-daa, done. But that wouldn't have been profitable, so instead let's cook up a cottage industry of increasingly obnoxious consent banners.
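
      As a minimal sketch of what "respect the DNT header" could look like server-side (assuming an Express-style app; the analytics script path is a hypothetical placeholder):

          // Honor the "DNT: 1" request header by skipping analytics entirely.
          // Express-style sketch; /analytics.js is a hypothetical placeholder.
          import express from "express";

          const app = express();

          app.use((req, res, next) => {
            // Browsers with Do Not Track enabled send the header "DNT: 1".
            res.locals.allowTracking = req.get("DNT") !== "1";
            next();
          });

          app.get("/", (_req, res) => {
            // Only emit the analytics tag when tracking is allowed.
            const tag = res.locals.allowTracking
              ? '<script src="/analytics.js"></script>'
              : "";
            res.send(`<html><body>Hello${tag}</body></html>`);
          });

          app.listen(3000);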

  • gond 13 hours ago

    No, the EU did not do that.

    Companies did that and thoughtless website owners, small and large, who decided that it is better to collect arbitrary data, even if they have no capacity to convert it into information.

    The solution to get rid of cookie banners, as it was intended, is super simple: only use cookies if absolutely necessary.

    It was and is a blatant misuse. The website owners all have a choice: shift the responsibility from themselves to the users and bugger them with endless pop ups, collect the data and don’t give a shit about user experience. Or, just don’t use cookies for a change.

    And look which decision they all made.

    A few notable examples do exist: https://fabiensanglard.net/ No popups, no banner, nothing. He just doesn't collect anything, thus, no need for a cookie banner.

    The mistake the EU made was to not foresee the madness used to make these decisions.

    I’ll give you that it was an ugly, ugly outcome. :(

    • wskinner 13 hours ago

      > The mistake the EU made was to not foresee the madness used to make these decisions.

      It's not madness, it's a totally predictable response, and all web users pay the price for the EC's lack of foresight every day. That they didn't foresee it should cause us to question their ability to foresee the downstream effects of all their other planned regulations.

      • gond 13 hours ago

        Interesting framing. If you continue this line of thought, it will end up in a philosophical argument about what kind of image of humanity one has. So your solution would be to always expect everybody to be the worst version of themselves? In that case, that will make for some quite restrictive laws, I guess.

        • wskinner 13 hours ago

          People are generally responsive to incentives. In this case, the GDPR required:

          1. Consent to be freely given, specific, informed and unambiguous, and as easy to withdraw as to give

          2. High penalties for failure to comply (€20 million or 4% of worldwide annual turnover, whichever is higher)

          Compliance is tricky and mistakes are costly. A pop-up banner is the easiest off-the-shelf solution, and most site operators care about focusing on their actual business rather than compliance, so it's not surprising that they took this easy path.

          If your model of the world or "image of humanity" can't predict an outcome like this, then maybe it's wrong.

          • gond 12 hours ago

            > and most site operators care about focusing on their actual business rather than compliance,

            And that is exactly the point. Thank you. What is encoded as compliance in your example is actually the user experience. They off-loaded responsibility completely to the users. Compliance is identical to UX at this point, and they all know it. To modify your sentence: “and most site operators care about focusing on their actual business rather than user experience.”

            The other thing is a lack of differentiation. The high penalties you are talking about really only matter for the top-traffic websites. I agree, it would be insane to gamble on removing the banners in that league. But tell me: why does every single website of a restaurant, fishing club or retro gamer blog have a cookie banner? For what reason? They won't make the turnover you dream about in your example even if they won the lottery, twice.

          • troupo 9 hours ago

            > Compliance is tricky

            How is "not selling user data to 2000+ 'partners'" tricky?

            > most site operators care about focusing on their actual business

            How is their business "send user's precise geolocation data to a third party that will keep that data for 10 years"?

            Compliance with GDPR is trivial in 99% of cases

    • lurking_swe 13 hours ago

      Well, you and I could have easily anticipated this outcome. So could regulators. For that reason alone…it’s stupid policy on their part imo.

      Writing policy is not supposed to be an exercise where you "will" a utopia into existence. Policy should consider current reality. If your policy just ends up inconveniencing 99% of users, what are we even doing lol?

      I don’t have all the answers. Maybe a carrot-and-stick approach could have helped? For example giving a one time tax break to any org that fully complies with the regulation? To limit abuse, you could restrict the tax break to companies with at least X number of EU customers.

      I’m sure there are other creative solutions as well. Or just implementing larger fines.

    • varenc 11 hours ago

      If the law incentivized practically every website to implement the law in the "wrong" way, then the law seems wrong and its implications weren't fully thought out.

    • eddythompson80 11 hours ago

      "If you have a dumb incentive system, you get dumb outcomes" - Charlie Munger

    • shagie 12 hours ago

      > The solution to get rid of cookie banners, as it was intended, is super simple: only use cookies if absolutely necessary.

      You are absolutely right... Here is the site on europa.eu (the EU version of .gov) that goes into how the GDPR works. https://commission.europa.eu/law/law-topic/data-protection/r...

      Right there... "This site uses cookies." Yes, it's a footer rather than a banner. There is no option to reject all cookies (you can accept all cookies or only "necessary" cookies).

      Do you have a suggestion for how the GDPR site could implement this differently so that they wouldn't need a cookie footer?

      • pelorat 42 minutes ago

        > Do you have a suggestion for how the GDPR site could implement this differently so that they wouldn't need a cookie footer?

          Well, it's an information-only website; it has no ads or even a login, so they don't need to use any cookies at all. In fact, if you look at the page response in the browser dev tools, there are no cookies on the website, so to be honest they should just delete the cookie banner.

    • constantcrying 3 hours ago

      But this is a failure on the part of the EU law makers. They did not understand how their laws would look in practice.

      Obviously some websites need to collect certain data, and the EU provided a pathway for them to do that: user consent. It was essentially obvious that every site which wanted to collect data for some reason could just ask for consent. If this wasn't intended by the EU, it was obviously foreseeable.

      >The mistake the EU made was to not foresee the madness used to make these decisions.

      Exactly. Because the EU law makers are incompetent and they lack technical understanding and the ability to write laws which clearly define what is and what isn't okay.

      What makes all these EU laws so insufferable isn't that they make certain things illegal, it is that they force everyone to adopt specific compliance processes, which often do exactly nothing to achieve the intended goal.

      User consent was the compliance path to be able to gather more user data. Not foreseeing that sites would simply ask for that consent was a failure of stupid bureaucrats.

      Of course they did not intend that sites would just show pop ups, but the law they created made this the most straightforward path for compliance.

      • gond an hour ago

        That cannot possibly be the right way to frame this.

        I agree with some parts of it but also see two significant issues:

        1. It is statistically implausible that everyone working at the EU is tech-illiterate and stupid while everybody on HN is a body of enlightenment on two legs. This is a tech-heavy forum, but I would guess most here are bloody amateurs regarding the theory and science of law, and you need at least two disciplines at work here, probably more.

        This is drifting too quickly into a territory of critique by platitudes for the sake of criticism.

        2. The EU made an error of commission, not omission, and I think that is a good thing. They need to make errors in order to learn from them and get better. Critique by platitudes is not going to help the case; it is actually working against it. The next person initiating an EU procedure to correct the current error with the popups will have the burden of doing everything perfectly right, all at once, thought through front to back, or face the wrath of the all-knowing internet. So, how should that work out? Exactly like this: we will be stuck for half an eternity and no one will correct anything, because if you don't do anything you can't do any wrong! We as a society mostly record the things that someone did wrong but almost never record something somebody should have done but didn't. That's an error of omission, and it is usually magnitudes more significant than an error of commission. What is needed is an alternative way of handling and judging errors. Otherwise, the path of learning by error will be blocked by populism.

        In my mind, the main issue is not that the EU made a mistake. The main issue is that it is not getting corrected in time, and we will probably have to suffer another ten years or so until the error gets removed. The EU as a system needs to speed up considerably, so that it can take an iterative approach when an error is made. I would argue for a cybernetic feedback-loop approach here, but as we are on HN, this translates to: move fast and break things.

        • rdm_blackhole a minute ago

          > Exactly. Because the EU law makers are incompetent and they lack technical understanding and the ability to write laws which clearly define what is and what isn't okay.

          I am sorry but I too agree with OP's statement. The EU is full of technocrats who have no idea about tech and they get easily swayed by lobbies selling them on a dream that is completely untethered to the reality we live in.

          > The next person initiating a EU procedure to correct the current error with the popups will have the burden of doing everything perfectly right, all at once, thought through front to back, or face the wrath of the all-knowing internet.

          You are talking as if someone is actually looking at the problem. Is that so? Because if there were such a feedback loop to correct this issue, as you seem to think, then where is it?

          > In my mind, the main issue is not that the EU made a mistake. The main issue is that it is not getting corrected in time and we will probably have to suffer another ten years or so until the error gets removed.

          So we should not hold people accountable when they make mistakes and waste everyone's time then?

          There is plenty of evidence to show that the EU as a whole is incompetent when it comes to tech.

          Case in point: the Chat Control law that is being pushed despite every single expert warning of the dire consequences in terms of privacy and the dangerous precedent it sets. Yet they keep pushing it because it is seen as a political win.

          If the EU knew something about tech, they would know that placing backdoors in all communication applications is a non-starter.

        • constantcrying 22 minutes ago

          On point 1: Tech illiteracy is something that affects an organization as a whole; it is independent of whether some individuals in that organization understand the issues involved. I am not arguing that nobody at the EU understands technology, but that key people pushing forward certain pieces of legislation have a severe lack of technical background.

          On point 2: My argument is that the EU is fundamentally legislating wrong. The laws they create are extremely complex and very hard to decipher, even by large corporate law teams. The EU does not create laws which clearly outlaw certain behaviors; they create corridors of compliance, which legislate how corporations have to set up processes to allow for certain ends. This makes adhering to these laws extremely difficult, as you cannot figure out whether something you are trying to do is illegal. Instead you have to work backwards: start with what you want to do, then follow the law backwards and decipher the way bureaucrats want you to accomplish that thing.

          I do not particularly care about cookie banners. They are just an annoying thing. But they clearly demonstrate how the EU is thinking about legislation, not as strict rules, but as creating corridors. In the case of cookie banners the EU bureaucrats themselves did not understand that the corridor they created allowed basically anyone to still collect user data, if they got the user to click "accept".

          The EU creates corridors of compliance. These corridors often map very poorly onto the actual processes and often do little to solve the actual issues. The EU needs to stop seeing themselves as innovators, who create broad highly detailed regulations. They need to radically reform themselves and need to provide, clear and concise laws which guarantee basic adherence to the desired standards. Only then will their laws find social acceptance and will not be viewed as bureaucratic overreach.

  • lofaszvanitt 13 hours ago

    No popup is required, just every lobotomized idiot copies what the big players do....

    Oh ma dey have popups. We need dem too! Haha, we happy!

    • zdragnar 13 hours ago

      Actually, it's because marketing departments rely heavily on tracking cookies and pixels to do their job, as their job is measured on things like conversions and understanding how effective their ad spend is.

      The regulations came along, but nobody told marketing how to do their job without the cookies, so every business site keeps doing the same thing they were doing, but with a cookie banner that is hopefully obtrusive enough that users just click through it.

      • tjwebbnorfolk 13 hours ago

        No it's because I'll get fined by some bureaucrat who has never run a business in his life if I don't put a pointless popup on my stupid-simple shopify store.

        • jungturk 12 hours ago

          Is it an option for your simple store to not collect data about subjects without their consent? Seems like an easy win.

          Your choice to use frameworks subsidized by surveillance capitalism doesn't need to preclude my ability to agree to participate does it?

          Maybe a handy notification when I visit your store asking if I agree to participate would be a happy compromise?

      • conradludgate 7 hours ago

        It's important to point out that it's actually not at all about cookies. It's tracking by using information stored on the user's device in general that needs to have consent.

        You could use localStorage for the purposes of tracking and it still needs to have a popup/banner.

        An authentication cookie does not need a cookie banner, but if you issue lots of network requests for tracking and monitor server logs, that does now need a cookie banner.

        If you don't store anything, but use fingerprinting, that is not covered by the law but could be covered by GDPR afaiu
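
        A minimal browser-side sketch of that distinction (the helper names and the consent check are assumptions for illustration, not a prescribed implementation):

            // Strictly functional storage (e.g. a session cookie) needs no banner.
            function storeSessionToken(token: string): void {
              document.cookie =
                `session=${encodeURIComponent(token)}; Secure; SameSite=Lax; Path=/`;
            }

            // Anything used for tracking - cookie, localStorage or otherwise -
            // needs prior consent; hasConsent() stands in for the user's recorded choice.
            function storeAnalyticsId(hasConsent: () => boolean): void {
              if (!hasConsent()) return; // no consent, no tracking identifier
              if (!localStorage.getItem("analytics_id")) {
                localStorage.setItem("analytics_id", crypto.randomUUID());
              }
            }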

  • baby 13 hours ago

    I hate these popups so much. The fact that they haven't corrected any of this bs shows how slow these people are to move.

    • tim333 3 hours ago

      The "I still don't care about cookies" extension works quite well. Auto-clicks accept and closes the window in approx half a second.

sorokod 20 hours ago

Presumably it is Meta's growth they have in mind.

Edit: from the linked in post, Meta is concerned about the growth of European companies:

"We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them."

  • t0mas88 16 hours ago

    Sure, but Meta saying "We share concerns raised by these businesses" translates to: It is in our and only our benefit for PR reasons to agree with someone, we don't care who they are, we don't give a fuck, but just this second it sounds great to use them for our lobbying.

    Meta has never done and will never do anything in the general public's interest. All they care about is harvesting more data to sell more ads.

  • isodev 20 hours ago

    Of course. Skimming over the AI Code of Practice, there is nothing particularly unexpected or qualifying as “overreach”. Of course, to be compliant, model providers can’t be shady which perhaps conflicts with Meta’s general way of work.

rchaud 19 hours ago

Kaplan's LinkedIn post says absolutely nothing about what is objectionable about the policy. I'm inclined to think "growth-stunting" could mean anything as tame as mandating user opt-in for new features as opposed to the "opt-out" that's popular among US companies.

  • j_maffe 15 hours ago

    It's always the go to excuse against any regulation.

jleyank 15 hours ago

I hope this isn't coming down to an argument of "AI can't advance if there are rules". Things like copyright, protection of the sources of information, etc.

chvid 20 hours ago

Why does Meta need to sign anything? I thought the EU made laws that anyone operating in the EU, including Meta, had to comply with.

  • AIPedant 20 hours ago

    It's not a law, it's a voluntary code of conduct given heft by EU endorsement.

    • FirmwareBurner 20 hours ago

      > it's a voluntary code of conduct

      So then it's something completely worthless in the globally competitive, cutthroat business world, something that even the companies who signed won't follow; they just signed it for virtue signaling.

      If you want companies to actually follow a rule, you make it a law and you send their CEOs to jail when they break it.

      "Voluntary codes of conduct" have less value in the business world than toilet paper. Zuck was just tired of this performative bullshit and said the quiet part out loud.

      • AIPedant 19 hours ago

        No, it's a voluntary code of conduct so AI providers can start implementing changes before the conduct becomes a legal requirement, and so the code itself can be updated in the face of reality before legislators have to finalize anything. The EU does not have foresight into what reasonable laws should look like, they are nervous about unintended consequences, and they do not want to drive good-faith organizations away, they are trying to do this correctly.

        This cynical take seems wise and world-weary but it is just plain ignorant, please read the link.

    • hopelite 15 hours ago

      “Heft of EU endorsement.” It’s amazing how Europeans have simply acquiesced to an illegitimate EU imitation government simply saying, “We dictate your life now!”.

      European aristocrats just decided that you shall now be subjects again, and Europeans said ok. It's kind of astonishing how easy it was, and most Europeans I met almost violently reject that notion in spite of the fact that it's exactly what happened, as they still haven't really grasped just how much Brussels is stuffing them.

      In a legitimate system it would need to be up to each sovereign state to decide something like that, but in contrast to the US, there is absolutely nothing that limits the illegitimate power grab of the EU.

      • sameermanek 4 hours ago

        Honestly, the US is really not in good shape to support your argument.

        If aristocratic figures had so much power in the EU, they wouldn't be fleeing from the union.

        In reality, the US is plagued with greed, scams, mafias in all sectors, human rights violations and an economy that's like a house of cards. In contrast, you feel human when you're in the EU. You have a voice, rights and common sense!

        It definitely has its flaws, but at least the presidents there are not rug-pulling their own citizens and giving pardons to crypto scammers... Right?

      • aosaigh 12 hours ago

        You don’t understand the fundamental structure of the EU

      • RandomThoughts3 15 hours ago

        > in contrast to the US, there is absolutely nothing that limits the illegitimate power grab of the EU.

        I am happy to inform you that the EU actually works according to treaties which basically cover every point of a constitution and has a full set of courts of law ensuring the parliament and the European executive respect said treaties and allowing European citizens to defend their interests in case of overreach.

        > European aristocrats just decided

        I am happy to inform you that the European Union has a democratically elected parliament voting its laws and that the head of commission is appointed by democratically elected heads of states and commissioners are confirmed by said parliament.

        If you still need help with any other basic fact about the European Union don’t hesitate to ask.

mediumsmart 9 hours ago

Meta knows all there is about overreach and of course they don’t want that stunted.

chrisweekly 10 hours ago

Nit: (possibly cnbc's fault) there should be a hyphen to clarify meta opposes overreach, not growth. "growth-stunting overreach" vs "growth (stunting overreach)"

isodev 11 hours ago

As a citizen, I'm perfectly happy with the AI Act. As a "person in tech", the kind of growth being "stunted" here shouldn't be happening in the first place. It's not overreach to put up some guardrails and protect humans from the overreaching ideas of the techbro elite.

  • tim333 3 hours ago

    A problem with the EU over-regulating, from its citizens' point of view, is that the AI companies will set up elsewhere and the EU will become a backwater.

  • Aeolun 11 hours ago

    As a techbro elite, I find it incredibly annoying when people regulate shit that 'could' be used for something bad (and many good things), instead of regulating someone actually using it for something bad.

    • isodev 11 hours ago

      You're too focused on the "regulate" part. It's a lot easier to see it as a framework. It spells out what you need in order to anticipate the spirit of the law, and what's considered good or bad practice.

      If you actually read it, you will also realise it’s entirely comprised of “common sense”. Like, you wouldn’t want to do the stuff it says are not to be done anyway. Remember, corps can’t be trusted because they have a business to run. So that’s why when humans can be exposed to risky AI applications, the EU says the model provider needs to be transparent and demonstrate they’re capable of operating a model safely.

thrance 3 hours ago

EU-wide ban of Meta incoming? I'd celebrate personally, Meta and their products are a net negative on society, and only serve to pump money to the other side of the Atlantic, to a nation that has shown outright hostility to European values as of late.

paulddraper 20 hours ago

Interesting because OpenAI committed to signing

https://openai.com/global-affairs/eu-code-of-practice/

  • nozzlegear 20 hours ago

    The biggest player in the industry welcomes regulation, in hopes it’ll pull the ladder up behind them that much further. A tale as old as red tape.

    • MPSFounder 20 hours ago

      [flagged]

      • nozzlegear 17 hours ago

        > Let us not fool ourselves. There are those online who seek to defend a master that could care less about them. Fascinating.

        How could you possibly infer what I said as a defense of Meta rather than an indictment of OpenAI?

        Fascinating.

      • zamadatix 20 hours ago

        Meta isn't actually an AI company, as much as they'd like you to think they are now. They don't mind if nobody comes out as the big central leader in the space, they even release the weights for their models.

        Ask Meta to sign something about voluntarily restricting ad data or something and you'll get your same result there.

  • bboygravity 16 hours ago

    Yeah well OpenAI also committed to being open.

    Why does anybody believe ANYthing OpenAI states?!

  • jahewson 20 hours ago

    Sam has been very pro-regulation for a while now. Remember his “please regulate me” world tour?

  • nkmnz 20 hours ago

    OpenAI does direct business with government bodies. Not sure about Meta.

    • somenameforme 20 hours ago

      About 2 weeks ago OpenAI won a $200 million contract with the Defense Department. That's after partnering with Anduril for quote "national security missions." And all that is after the military enlisted OpenAI's "Chief Product Officer" and sent him straight to Lt. Colonel to work in a collaborative role directly with the military.

      And that's the sort of stuff that's not classified. There's, with 100% certainty, plenty that is.

zeptonix 20 hours ago

Good. As Elon says, the only thing the EU does export is regulation. Same geniuses that make us click through 5 cookie pop-ups on every webpage.

  • McAlpine5892 7 hours ago

    People complain more about cookie banners than they do the actual invasive tracking by those cookies.

    Those banners suck and I wouldn't mind if the EU rolled back that law and tried another approach. At the same time, it's fairly easy to add an extension to your browser that hides them.

    Legislation won't always work. It's complex, and human behavior is somewhat unpredictable. We've let tech run rampant up to this point - it's going to take some time to figure out how best to control them. Throwing up our hands because it's hard to protect consumers from powerful multinational corporations is a pretty silly position imo.

    • seydor 4 hours ago

      > than they do the actual invasive tracking by those cookies.

      maybe people have rationally compared the harm done by those two

      • Barrin92 3 hours ago

        can you expand on what sort of rationality would lead a person to consider an at worst annoying pop-up to be more dangerous than data exfiltration to companies and governments that are already acting in adversarial ways? The US government is already using people's social media profiles against them, under the Cloud act any US company can be compelled to hand data over to the government, as Microsoft just testified in France. That's less dangerous than an info pop up?

        Of course it has nothing to do with rationality. They're mad at the first thing they see, akin to the smoker who blames the regulators when he has to look at a picture of a rotten lung on a pack of cigarettes

        • seydor 3 hours ago

          GDPR doesn't stop governments. Governments are already spying without permission and they exploit stolen data all the time. So yes, the cost of GDPR compliance, including popups, is higher than the imperceptible cost of tracked advertising.

          • Barrin92 2 hours ago

            For one, that is objectively incorrect. GDPR prevents a whole host of data collection outright, shifts the burden onto corporations to collect the minimal amount of data possible, and gives you the right to explicitly consent to what data can be collected.

            Being angry at a popup that merely makes transparent, what a company tries to collect from you, and giving you the explicit option to say no to that, is just infantile. It basically amounts to saying that you don't want to think about how companies are exploiting your data, and that you're a sort of internet browsing zombie. That is certainly a lot of things, but it isn't rational.

  • cenamus 20 hours ago

    They didn't give us that. Mostly non-compliant websites gave us that.

    • dmix 16 hours ago

      The entire ad industry moved to fingerprinting, mobile ad kits, and 3rd party authentication login systems, so it made zero difference even if they did comply. Google and Meta aren't worried about cookies when they have JS on every single website, but it burdens every website user.

      • mpeg 10 hours ago

        This is not correct, the regulation has nothing to do with cookies as the storage method, and everything to do with what kind of data is being collected and used to track people.

        Meta is hardly to blame here; it is the site owners that choose to add Meta tracking code to their site and therefore have to disclose it and opt in the user via "cookie banners".
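
        For illustration, an opt-in pattern a site owner could use so that no third-party tracking code runs before consent (the consent key and script URL are placeholders, not a real integration):

            // Only inject the third-party tracking script after an explicit opt-in.
            function loadTrackingScript(src: string): void {
              const s = document.createElement("script");
              s.async = true;
              s.src = src;
              document.head.appendChild(s);
            }

            function onConsentChoice(accepted: boolean): void {
              localStorage.setItem("tracking_consent", accepted ? "granted" : "denied");
              if (accepted) {
                loadTrackingScript("https://tracker.example.com/pixel.js"); // placeholder URL
              }
              // If declined, nothing is loaded and no identifier is stored.
            }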

    • myaccountonhn 3 hours ago

      This thread is people going "EU made me either choose to tell you that I spy on you or stop spying on you, now I need to tell everyone I spy on them, fucking EU".

    • spongebobstoes 19 hours ago

      That's deflecting responsibility. It's important to care about the actual effects of decisions, not hide behind the best-case scenario, especially for governments.

      In this case, it is clear that the EU policy resulted in cookie banners.

  • t0mas88 16 hours ago

    Elon is an idiot.

    If he disagrees with EU values so much, he should just stay out of the EU market. It's a free world, nobody forced him to sell cars in the EU.

  • saubeidl 16 hours ago

    Trump literally started a trade war because the EU exports more to the US than vice versa.

    • tim333 3 hours ago

      He also did the trade war thing with the UK, which imports more from the US than it exports. He just likes trade wars, I think.

sandspar 14 hours ago

Meta on the warpath, Europe falls further behind. Unless you're ready for a fight, don't get in the way of a barbarian when he's got his battle paint on.

  • Ylpertnodi 9 hours ago

    > Unless you're ready for a fight, don't get in the way of a barbarian when he's got his battle paint on.

    You talking about Zuckerberg?

    • sandspar 9 hours ago

      Yeah. He just settled the Cambridge Analytica suit a couple days ago, he basically won the Canadian online news thing, he's blown billions of dollars on his AI angle. He's jacked up and wants to fight someone.

bilekas 16 hours ago

> It aims to improve transparency and safety surrounding the technology

Really, it does, especially for a technology run by so few and changing things so fast.

> Meta says it won’t sign Europe AI agreement, calling it an overreach that will stunt growth

God forbid critical things and impactful tech like this be created with a measured head, instead of this nonsense mantra of "Move fast and break things"

I'd really prefer NOT to break whatever semblance of society social media hasn't already broken.

SanjayMehta 8 hours ago

Not a big fan of this company or its founder but this is the right move.

The EU is getting to be a bigger nuisance than they are worth.

paul7986 20 hours ago

The US, China, and others are sprinting, and thus spiraling toward destitution for the majority of society, unless we force these billionaires' hands and figure out how we will eat and sustain our economies when one person is now doing a white-collar or blue-collar (Amazon warehouse robots) job that ten used to do.

brainzap 15 hours ago

The Meta that uses its advertising tooling for propaganda and helped elect Trump?

justlikereddit 16 hours ago

The more I read of the existing rule sets within the eurozone, the less surprised I am that they produce additional shit-tier acts like this.

What does surprise me is that anything at all works under the existing rulesets. Effectively no one has technical competence, and the main purpose of legislation seems to be adding mostly meaningless but paternalistically formulated complexities in order to justify hiring more bureaucrats.

> How to live in Europe
> 1. Have a job that does not need state approval or licensing.
> 2. Ignore all laws, they are too verbose and too technically complex to enforce properly anyway.

  • randomNumber7 7 hours ago

    I think you can only happily live in Europe if you are employed by the state and like all the regulations.

lvl155 20 hours ago

I have a strong aversion to Meta and Zuck, but the EU is pretty tone-deaf. Everything it does reeks of political, anti-American-tech undertones.

  • zeptonix 20 hours ago

    They're career regulators

constantcrying 7 hours ago

The problem with EU regulation is the same as always: first and foremost, they do not understand the topic and cannot articulate a clear statement of law.

They create mountains of regulations, which are totally unclear and which require armies of lawyers to interpret. Adherence to these regulations becomes a major risk factor for all involved companies, which then try to avoid interacting with that regulation at all.

Getting involved with the GDPR is a total nightmare, even if you want to respect your users' privacy.

Regulating AI like this is especially idiotic, since every year currently brings a major shift in how AI is utilized. It is a completely open question how hard training an AI "from scratch" will be in five years. The EU is incapable of actually writing laws that make it clear what isn't allowed; instead, they create vague corridors for how companies should arrive at certain outcomes.

The bureaucrats see themselves as the innovators here. They aren't trying to make laws that prevent abuses; they are creating process corridors for companies to follow. In the case of AI, these corridors will seem ridiculous in five years.

renewiltord 20 hours ago

[flagged]

  • edhelas 20 hours ago

    Sent from an iPhone that probably has USB-C because of the EU.

    • ars 20 hours ago

      Just because they occasionally (or even frequently) do good things does not mean that, overall, their policies don't harm their own economies.

      • myko 13 hours ago

        The economy does not exist in a vacuum. Making the number go up isn't the end goal; the goal is to improve citizens' lives and society as a whole. Everything is a tradeoff.

    • renewiltord 20 hours ago

      I charge my phone wirelessly. The presence of a port isn't a positive for me. It's just a hole I could do without. The shape of the hole isn't important.

      Besides, I posted from my laptop.

    • zeptonix 20 hours ago

      [flagged]

      • saubeidl 16 hours ago

        Please don't use ableist language.

brap 16 hours ago

[flagged]

  • elliotec 15 hours ago

    Europe is the world's second-largest economy and has the world's highest standard of living. I'm far from a fan of regulation, but they're doing a lot of things right by most measures. Irrelevancy is unlikely in their near future.

vicnov 20 hours ago

Just like GDPR, it will tremendously benefit big corporations (even if Meta is resistant) and those who are happy NOT to follow regulations (which is a lot of Chinese startups).

And consumers will bear the brunt.