- AI that collects “real time” biometric data in public places for the purposes of law enforcement.
- AI that creates — or expands — facial recognition databases by scraping images online or from security cameras.
- AI that uses biometrics to infer a person’s characteristics
All of the above can be achieved with plain software, statistics, and old ML techniques, i.e. 'non-hype' AI.
I am not familiar with the detail of the EU AI Act, but it seems like the article is simplifying important details.
I assume the ban is on the purpose/usage rather than whatever technology is used under the hood, right?
For the purposes of this Regulation, the following definitions apply:
(1) ‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments; Related: Recital 12
https://artificialintelligenceact.eu/article/3/ https://artificialintelligenceact.eu/recital/12/ So, it seems like yes, software, if it is non-deterministic enough, would qualify. My impression is that software that simply says "if your income is below this threshold, we deny you a credit card" would be fine, but somewhere along the line, as your decision tree grows large enough, that probably changes.
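To make that concrete, here is a minimal hypothetical sketch (Python, with scikit-learn as a stand-in; purely illustrative, not a legal test) contrasting a hand-written rule with a system that infers its own decision boundary from data:

    # Hypothetical illustration only -- where the legal line falls is for lawyers/courts.
    from sklearn.tree import DecisionTreeClassifier

    def fixed_rule(income: float) -> bool:
        """A human wrote this threshold; the logic is fully inspectable."""
        return income >= 30_000

    # Versus a system that *infers* how to generate its output from data:
    # toy history: [income, credit_score] -> repaid (1) / defaulted (0)
    X = [[20_000, 550], [45_000, 700], [38_000, 620], [60_000, 710], [25_000, 580]]
    y = [0, 1, 0, 1, 0]
    model = DecisionTreeClassifier(max_depth=3).fit(X, y)

    def learned_rule(income: float, credit_score: float) -> bool:
        """The thresholds here were inferred from training data, not written by a person."""
        return bool(model.predict([[income, credit_score]])[0])

The first is plainly "just software"; the second starts to look like the Article 3 definition, and a big enough learned tree is hard to distinguish from any other black box.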
https://uk.practicallaw.thomsonreuters.com/Glossary/UKPracti... describes a bit of how recitals interact with the operating law; they're explicitly used for disambiguation.
So your hip new AI startup that's actually just hand-written regexes under the hood is likely safe for now!
(Not a lawyer, this is neither legal advice nor startup advice.)
We'll probably have to wait until they fine someone a zillion dollars to figure out what they actually meant.
The distinction is accountability: determining whether a human decided the outcome, or whether it was decided by an obscure black box in which data is algebraically twisted and turned in ways no human can fully predict today.
Legally that accountability makes all the difference. It's why companies scurry to use AI for all the crap they want to wash their hands of. "Unacceptable risk AI" will probably simply mean "AI where no human accepted the risk", and with it the legal repercussions for the AI's output.
In reality, we will wait until someone violates the obvious spirit of this so egregiously, ignores multiple warnings to that end, and winds up in court (a la the GDPR suits). This seems pretty clear.
If you use Copilot to generate code by essentially just letting it autocomplete the entire code base with little supervision, yeah, sure, that might maybe fall under this law somehow.
If you use Copilot like you would use autocomplete, i.e. by letting it fill in some sections but making step-by-step decisions about whether the code reflects your intent or not, it's not functionally different from having written that code by hand as far as this law is concerned.
But looking at these two options, nobody actually does the first one and then just leaves it at that. Letting an LLM generate code and then shipping it without having a human first reason about and verify it is not by itself a useful or complete process. It's far more likely this is just a part of a process that uses acceptance tests to verify the code and then feeds the results back into the system to generate new code and so on. But if you include this context, it's pretty obvious that this indeed would describe an "AI system" and the fact there's generated code involved is just a red herring.
So no, your gotcha doesn't work. You didn't find a loophole (or anti-loophole?) that brings down the entire legal system.
That's every AI system. It follows the rules defined solely by the programmers (who I suppose might sometimes stretch the definition of natural persons) who made pytorch or whatever framework.
Just rerun the application with higher income until you get a pass. Then tell the person their application was rejected because income was not at least whatever that passing income amount was.
Maybe also vary some other inputs to see if it is possible to get a pass without raising income as much, and add to the explanation that they could lower the income needed by, say, getting a higher credit score, lowering their outstanding debt, or not changing jobs as often, or whatever.
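A minimal sketch of that what-if probing, assuming the black box is just an opaque callable (the `approve` function and field names here are hypothetical); the same loop can be repeated over debt, credit score, etc. to offer alternatives:

    # Hypothetical sketch: probe an opaque approve/reject model with what-if scenarios
    # to produce a concrete, actionable reason for the applicant.

    def minimum_passing_income(approve, application, step=1_000, cap=500_000):
        """Rerun the black box with higher incomes until it approves (or we give up)."""
        probe = dict(application)
        while probe["income"] <= cap:
            if approve(probe):
                return probe["income"]
            probe["income"] += step
        return None

    def explain_rejection(approve, application):
        needed = minimum_passing_income(approve, application)
        if needed is None:
            return "Rejected; approval is not reachable by raising income alone."
        return (f"Rejected: income of {application['income']} was below the roughly "
                f"{needed} needed given your other details.")

    # Stand-in black box for demonstration:
    black_box = lambda app: app["income"] >= 3 * app["debt"]
    print(explain_rejection(black_box, {"income": 40_000, "debt": 20_000}))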
It is simply that, e.g., "on some historical dataset, this boundary most reliably predicted default" -- but this confers no normative reason to accept or reject any individual application (cf. the ecological fallacy). And so, in a very literal way, the operator of this model has no normative reason for accepting/rejecting any individual application.
But banks, at least in my country (central EU), don't have to explain their reasons for rejecting a mortgage application. So why would their automated systems have to?
There is a so-called three-lines system: the operational line does the actual thing (approves or rejects the mortgage), the second line gives the operational line the manual for doing so the right way, and internal audit keeps an eye on whether whatever the first line is doing is actually what the policy says they should be doing.
It's entirely plausible that the operational line is an actual LLM trained on a policy that the compliance department drafted, with the audit department occasionally checking the outputs of the model against the policy.
But at this point it's much easier to use an LLM to write a deterministic function in your favorite Lisp based on the policy, and run that to make decisions.
[1] https://en.wikipedia.org/wiki/Equal_Credit_Opportunity_Act#R...
[2] https://www.nolo.com/legal-encyclopedia/do-lenders-have-to-t...
Why do you need an AI if what you are doing is "if X < N" ?
For someone with a great credit history, lots of assets, a long term job in a stable position, and low debt they might be approved with a lower income than someone with a poor credit history whose income comes from a job in a volatile field.
There might be some absolute requirements, such as the person having a certain minimum income independent of all those other factors, and having a certain minimum credit score, and so on. If the application is rejected because it doesn't meet one of those, then sure, you can just do a simple check and report that.
But most applications will be above the absolute minimums in all parameters and the rejection is because some more complicated function of all the criteria didn't meet the requirements.
But you can't just tell the person "We put all your numbers into this black box and it said 'no'." You have to give them specific reasons their application was rejected.
Say a lender has used machine learning to train some sort of black box to take in loan applications and respond with an approve/reject response. If they reject an application using that, the Equal Credit Opportunity Act in the US requires that they tell the applicant a specific reason for the rejection. They can't just say "our machine learning model said no".
If they were not using any kind of machine learning system, they probably would have made the decision according to some series of rules, like "modified income must be X times the monthly payment on the loan", where modified income is the person's monthly income with adjustments for various things. Adjustments might be multipliers based on credit score, debt, and other things.
With that kind of system they would be able to tell you specifically why you were rejected. Say you need a modified income of $75k and you are a little short. They could look at their rules and figure out that you could get a modified income of $75k if you raised your income by a specific amount or lowered your debt by a specific amount, or by some combination of those.
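A hypothetical sketch of such a rule-based decision (all numbers and multipliers invented), where the specific reason and the amounts needed to change the outcome fall straight out of the rules:

    # Invented rules for illustration: every adjustment is visible, so the rejection
    # reason can be stated exactly.
    def decide(income, credit_score, monthly_debt, monthly_payment):
        score_factor = 1.0 if credit_score >= 700 else 0.85   # visible adjustment
        have = income * score_factor - 12 * monthly_debt      # "modified income"
        required = 4 * 12 * monthly_payment                   # must cover 4x annual payments
        if have >= required:
            return "approved", None
        shortfall = required - have
        reason = (f"Modified income {have:,.0f} is below the required {required:,.0f}. "
                  f"Raising gross income by about {shortfall / score_factor:,.0f}, or cutting "
                  f"monthly debt by about {shortfall / 12:,.0f}, would meet the rule.")
        return "rejected", reason

    print(decide(income=70_000, credit_score=650, monthly_debt=2_000, monthly_payment=1_000))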
That kind of feedback is useful to the applicant. It tells them specific things they can do to improve their chances.
With the company using a machine learning black box they don't know the rules that the machine has learned. Hence my suggestion of asking the black box what-if scenarios to figure out specific things the applicant can change to get approval.
In that sense it's very practical, but it kicks the can down the road. Maybe the thing has a hidden parameter that represents the risk of the applicant being fired, which increases the threshold by 5% if the salary is a round number. Or it is more likely to reject everyone with an income between 73 and 75k because it learned this is a proxy to a parameter you are explicitly forbidden to have.
Let's just say it doesn't have a discontinuity, and actually produces a threshold which is deterministically compared with your income. How does it come up with this threshold? You may not be required to disclose that to the applicant, but it would be a shame if people figured out that the threshold is consistently higher for a certain population (for example, people whose given name ends with a vowel).
It's fairly reasonable for a regulator to ask you to demonstrate it doesn't do any of these things.
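As a sketch of what such a demonstration could look like (everything here is hypothetical: the `threshold_model`, the attribute, the data), you can probe the model over a population and compare the thresholds it produces per group:

    # Hypothetical audit: does the model's income threshold differ systematically
    # between groups it is supposed to be blind to?
    from statistics import mean

    def audit_thresholds(threshold_model, applicants, group_of):
        by_group = {}
        for a in applicants:
            by_group.setdefault(group_of(a), []).append(threshold_model(a))
        return {group: mean(ts) for group, ts in by_group.items()}

    # Toy stand-ins: this model secretly keys off whether the given name ends in a vowel.
    threshold_model = lambda a: 80_000 if a["name"][-1].lower() in "aeiou" else 70_000
    applicants = [{"name": n} for n in ["Anna", "Marco", "John", "Ines", "Mark", "Luca"]]
    ends_in_vowel = lambda a: a["name"][-1].lower() in "aeiou"

    print(audit_thresholds(threshold_model, applicants, ends_in_vowel))
    # {True: 80000, False: 70000} -- a persistent gap like this is exactly what
    # a regulator would want explained.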
The obvious straightforward read is along the lines of: imagine you make some software, which then does something bad, and you end up in court defending yourself with an argument along the lines of, "I didn't explicitly make it do it, this behavior was a possible outcome (i.e. not a bug) but wasn't something we intended or could've reasonably predicted" -- if that argument has a chance of holding water, then the system in question does not fall under the exception you quoted.
The overall point seems to be to make sure systems that can cause harm always have humans that can be held accountable. Software where it's possible to trace the bad outcome back to specific decisions made by specific people who should've known better is OK. Software that's adaptive to the point it can do harm "on its own" and leaves no one but "the system" to blame is not allowed in those applications.
Two different machines can be designed for the same use case, but the possible bad outcomes in either "correct" use or malicious use of the two machines can be very different. So it is reasonable to ban the one which has unacceptable bad outcomes.
For example, while both a bicycle and a dirt bike are mobility vehicles, a park may allow one and ban the other.
It would seem accountability would only be higher in systems where humans were not part of the decision-making process.
However, for those that might not be purely 1984-inspired, I do think that we need to have legislation that is capable of making the distinction between:
- algorithms that can be reasoned about and analysed
- "AI" systems that resist such analysis
The main issue is around responsibility. Who would be held responsible for illegal (discriminatory) biases in an AI system? How would regulators even detect, specify and quantify those biases?
In non-AI systems, we can analyse the algorithm and evaluate if the biases are due to errors (negligence) or are by design (malice / large scale criminality)
So if an AI can't change its weights after deployment, it's not really an AI? That doesn't make sense.
As for the other criteria, they're so vague I think a thermostat might apply.
A learning thermostat would apply, say one that uses historical records to predict changes in temperature and preemptively adjusts. And it would be low risk and unregulated in most cases. But attach it to a self-heating crib or a premature-baby incubator and that would jump to high risk, and you might have to prove it is safe.
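For what it's worth, that kind of "learning" thermostat is only a few lines; a hypothetical sketch using a least-squares trend over recent readings:

    # Minimal learning-thermostat sketch: fit a trend to recent temperatures and
    # start heating early if the extrapolation dips below the setpoint.
    def predict_next(temps):
        n = len(temps)
        x_mean, y_mean = (n - 1) / 2, sum(temps) / n
        slope = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(temps)) \
                / sum((x - x_mean) ** 2 for x in range(n))
        return temps[-1] + slope  # extrapolate one step ahead

    def heater_on(history, setpoint=20.0):
        return predict_next(history[-6:]) < setpoint  # preemptively heat on a falling trend

    print(heater_on([21.0, 20.6, 20.3, 20.1, 19.9, 19.8]))  # True: trend says it will dip

Low risk on a living-room wall; the exact same dozen lines attached to an incubator would be a different conversation.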
As long as the thermostat doesn't control people's lives, that's fine.
Quite.
One wonders if the people who came up with this have any actual understanding of the technology they're attempting to regulate.
We already have ways to predict avalanche risk that are well understood and explainable. There should be a high threshold on replacing that.
The precise language on high risk is here [1], but some enumerations are placed in the annex, which (!!!) can be amended by the commission, if I am not completely mistaken. So this is very much a dynamic regulation.
Just joking, but I think it is a funny parallel. Also because of it being probably solely human made rules.
yes, and with the same problems if applied to the same use cases in the same way
in turn they get regulated, too
It would be strange to limit a law to some specific technical implementation. This isn't some "let's fight the hype" regulation but a serious long-term effort to regulate automated decision-making and classification processes which pose an increased or high risk to society.
To me it’s just generative AI, LLMs, media generation. But I see the CNN folks suddenly getting “AI” attention. Anything deep learning really. It’s pretty weird. Even our old batch processing, SLURM based clusters with GPU nodes are now “AI Factories”.
That's not what AI is.
Artificial Intelligence has decades of use in academia. Even a script which plays Tic Tac Toe is AI. LLMs have advanced the field profoundly and gained widespread use. But that doesn't mean that a Tic Tac Toe bot is no longer AI.
When a term passes to the mainstream people manufacture their own idea of what it means. This has happened to the term "hacker". But that doesn't mean decades of AI papers are wrong because the public uses a different definition.
It's similar to the professional vs the public understanding of the term "prop" in movie making. People were criticizing Alec Baldwin for using a real gun on the set of Rust instead of a "prop" gun. But as movie professionals explained, a real gun is a prop gun. Prop in theater/movies just means property. It's anything that's used in the production. Prop guns can be plastic replicas, real guns which have been disabled, or actually firing guns. Just because the public thinks "prop" means "fake", doesn't mean movie makers have to change their terms.
At least that's what we used to do.
Btw, you bring up the perspective of realising that our tools weren't adequate. But it's broader: completely ignoring the tools, we also realised that, e.g., being able to play chess really, really well didn't actually capture what we wanted to mean by 'intelligence'. Similar for other outcomes.
This is not about data collection (GDPR already takes care of that), but about AI-based categorization and identification.
"AI system" and other terms are defined in article 3: https://artificialintelligenceact.eu/article/3/
Trying to define it for scope was IMHO a mistake.
Their deep meaning is "we don't want machines to make decisions". A key point for them has always been "explainability".
GDPR has a provision about "profiling" and "automated decision making" for key aspects of life. E.g. if you ask for a mortgage (pretty important life changing/affecting decision) and the bank rejects it you a) can ask them "why" and they MUST explain, in writing, and b) if the decision was made in a system that was fed your data (demographic & financial) you can request that a Human to repeat the 'calculations'.
Good luck getting ChatGPT to explain.
They are trying to avoid the dystopian nightmare of Insurance & Healthcare in the US (apologies - I don't mean to disrespect the dead, I mean to disrespect the industry), where a system gets to decide 'your claim is denied' against humans' (doctors', in this case, sometimes imperfect) consultations, because one parameter says "make X amount of profit above all else" (perhaps not coded with this precise parameter, but somehow else).
Now, considering the (personal) data collected and sent to companies in the US (or other countries) that don't fall under the Adequacy Decisions [0], and combining that with the aforementioned (decision-making) risks, using LLMs in production is 'very risky'.
Using Copilot for writing code is very much different, because there the control of "converting the code to binaries, and moving said binaries to the Prod env." stays with humans (they used to call them Librarians back in the day...), so human intervention is required to do code review, code testing, etc. (just in case SkyNet wrote code to export the data 'back home' to OpenAI, xAI, or whichever AI company it came from).
I haven't read the regulation lately/in its final text (I contributed and commented some when it was still being drafted), and/but I remember the discussions on the matter.
[0]: https://commission.europa.eu/law/law-topic/data-protection/i...
EDIT: ultimately we want humans to have the final word, not machines.
They will interpret "predict" as merely "report" or "act on".
This is terrible.
I would really love to see a Q&A thread like https://news.ycombinator.com/item?id=42770125 from someone who's actually read the documents, practices law in the area, and also understands the difference between US and EU law.
https://outofthecomfortzone.frantzmiccoli.com/thoughts/2024/... and here is my shameless plug.
Your comparison to GDPR seems correct in a way: both are quite vague and wide. The implementation of GDPR is still unclear in certain situations, and it was even worse when it was launched. The EU AI Act has very few references to work with, and except for very obvious areas it is still a lot of guesswork.
I WANT it to be difficult for AI companies to steal other people’s hard work just like I WANT Facebook to have to spend millions of dollars on lawyers to make sure whatever data they’re collecting and sharing about me doesn’t violate my rights.
- Nothing has changed in Facebook and Google data collection practices, who together with other big corps account for > 90% of data collection
- Many mid tier competitors lost market share, focusing power to Google
- EU small software companies pay an estimated extra 400 EUR/year to satisfy GDPR compliance, with little tangible benefit to EU citizens.
It's called unintended consequences. We all want Zuckerberg to collect less data, but the way GDPR was implemented mostly hurt small businesses disproportionately. E.g. you now need to hire a lawyer to analyse whether you can collect an IP address and for what purposes, as discussed here.
> The main burden falls on SMEs, which experienced an average decline in profits of 8.5 percent. In the IT sector, profits of small firms fell by 12.5 percent on average. Large firms, too, are affected, with profits declining by 7.9 percent on average. Curiously, large firms in the IT sector saw the smallest decline in profits, of “only” 4.6 percent. Specifically, the authors find “no significant impacts on large tech companies, like Facebook, Apple and Google, on either profits or sales,” putting to bed the myth that U.S. technology firms are the enemy of regulation because it hits their bottom lines.
https://datainnovation.org/2022/04/a-new-study-lays-bare-the...
> I'm old enough to remember when everyone claimed EU tech law was about to ban memes, which didn't happen...
AFAIK those parts of that law was changed somewhat
[citation needed]
This is just laughably incorrect. Literally every Fortune 500 that I work with who has operations in Europe has an entire team that owns GDPR compliance. It is one of the most successful projects to curtail businesses treating private data like poker chips since HIPAA.
Anyways, GDPR doesn't protect your data, it just specifies how companies can use it. So my name, address, phone number, etc. will still be stored by every webshop for 10 years or so, just waiting to be breached (because of some tax laws).
Facebook and Google got sued, paid fines, and changed their behavior. I can do an easy export of all of my FB and G data, thanks to the GDPR.
"EU small software companies pay estimated extra 400 EUR/year to satisfy GDPR compliance"
WTF? no! I work with several small companies and it's super easy to just NOT store anyone's birthday (why would you need that for e-commerce?) and to anonymize IPs (Google provides a plugin for GA). And, basically, that's it. Right now, I can't even find an example of how the GDPR has created any costs. It's more like people changed their behavior and procedures once GDPR was announced and that's "good enough" to comply.
At 40k EUR / year in salary, that's about 1.6 hours a month dealing with GDPR. That sounds about right; it's like 5 hours a quarter deploying anonymizers or updating code to export the data you have on people. I honestly expected it to be higher; I would have thought it was in the realm of 40 hours a quarter just doing mundane things. Auditing to make sure PII didn't sneak in somewhere, updating anonymizer code/deployments and reviewing the same.
On the other hand, if you're concerned about AI risk, I don't see how it could be otherwise. We don't have a clear grasp of what the real limits of capabilities are. Some people are promising "AGI" "just around the corner". Other people are spinning tales about gray goo. The risk of automated discrimination has loomed large ever since IBM sold the Hollerith collation machines used in the Holocaust.
If it delays AI "innovation" by forcing only the deployment of solutions which have had at least some check by legal to at least try to avoid harming citizens, that's ... good?
How is the gdpr vague?
https://gdpr.eu/eu-gdpr-personal-data/
They are explicitly listed as an example of PII.
Moreover, to reason about this, one also needs to take into account Art 6.2 which means there might be an additional 27 laws you need to find and understand.
Note, however, that recital 30 which you quoted is explicitly NOT referenced by Art. 6, at least according to this unofficial site: https://gdpr-info.eu/art-6-gdpr/
This particular case might be solved through hashing, but then there are only 4.2bn IPs so easy to try out all hashes. Or maybe it's only OK with IPv6?
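To illustrate how cheap that brute force is (hypothetical sketch, SHA-256 as the stand-in hash): a hashed IPv4 address can be reversed simply by hashing every candidate. The demo below only walks a /24 so it finishes instantly, but the full 2^32 space is a matter of hours on ordinary hardware.

    # Why hashing alone does not anonymize IPv4 addresses: the space is only 2^32.
    import hashlib

    def sha256_ip(ip: str) -> str:
        return hashlib.sha256(ip.encode()).hexdigest()

    def reverse_in_subnet(target: str, prefix: str = "10.0.0."):
        """Try every host in a /24; extending this to all 2^32 addresses is just 4 loops."""
        for d in range(256):
            candidate = f"{prefix}{d}"
            if sha256_ip(candidate) == target:
                return candidate
        return None

    print(reverse_in_subnet(sha256_ip("10.0.0.42")))  # "10.0.0.42"

A keyed hash or truncation changes the picture somewhat, but a bare hash of the address is effectively reversible.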
I find this vague or at least hard to reconcile with technical everyday reality, and doing it well can take enormous amounts of time and money that are not spent on advancing anything of value.
There are rulings that access providers are/were allowed to save full IP addresses for up to 7 days to handle misuse of services etc. and any longer storage seems unnecessary and unlawful.
In other cases there were recommendations of up to 30 days, ideally with anonymized addresses where the last one or two octets are automatically removed. I've also seen 30 days as kind of the default setting for automatic log purging with shared webhosters.
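That octet truncation is about as simple as anonymization gets; a hypothetical sketch:

    # Drop the last one or two octets of an IPv4 address before it hits the logs.
    def anonymize_ipv4(ip: str, keep_octets: int = 3) -> str:
        parts = ip.split(".")
        return ".".join(parts[:keep_octets] + ["0"] * (4 - keep_octets))

    print(anonymize_ipv4("203.0.113.77"))                 # 203.0.113.0
    print(anonymize_ipv4("203.0.113.77", keep_octets=2))  # 203.0.0.0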
Our lawyer told us that he estimates that saving full IP addresses for 14 days in logfiles would be fine in regards of preventing/tracking misuse of services or attacks against the infrastructure.
If this would ever come to court it would most probably be up to the judge to see whether this is really fine or already too much. Therefore we had to document the process and why we think 14 days is reasonable and so on.
The GDPR lacks a specific time frame and I think that's okay. There's always some "wiggle room" in European laws, it's about not misusing that room and sincerely acting in the best interest of everybody.
In addition to the other answers, I want to point out that recital 49 says that it is possible under legitimate interest (6(1)f).
If only I had known this in my last corporate role where this discussion alone cost us weeks :/
No, it doesn't. Subsections b, c, and f roughly cover this. On top of that, no one is going to come at you with fines for doing regular business things as long as you don't store this data indefinitely long, sell it to third parties, or use it for tracking. As laid out in Article 1.1.
On top of that, for many businesses existing laws override GDPR. E.g. banks have to keep personal records around for many years.
Sounds vague to me, which was the original point.
That being said: it is extremely strict, a lot of lawyers like to make it stricter (because for them it means safer), and a lot of lawyers have to back off under business constraints (that sometimes push to go below legal requirements). My experience is that no two companies have the same understanding of GDPR.
Disclaimer: i am advising a company that sells AI act related compliance tooling
> AI that tries to infer people’s emotions at work or school
I wonder how broadly this will be construed. For example, if an agent uses CoT and it needs emotional state as part of that, can it be used in a work or school setting at all?
So, this targets the use case of a third party using AI to detect the emotional state of a person.
Do I want cameras all over society tracking emotions? Probably not. But there’s a baby somewhere in that bath water.
Then I started thinking how this could be used in restaurants to see if waiters smile at the people they are serving. Or in customer service (you can actually hear it when people smile on the phone).
Then I realised that this kind of tech would definitely lead to abuse
(btw that's not the reason I didn't build it, it was just not that easy to build)
Not sure if that’s solved
Yes. This is how you know that all the people screaming about the EU overregulating and how the EU will miss all that AI innovation haven't even bothered to Google or ask their preferred LLM about the legislation. It's mostly just common sense to avoid EU citizens having their rights or lives decided by blackbox algorithms nobody can explain, be it in a Post Office (UK) scandal style, or US healthcare style.
It might well be a useful tool to point at yourself.
It's an entirely inappropriate one to point at someone else. If you can't imagine having someone estimate your emotional state (usually incorrectly), and use that as a basis to disregard your opinion, you've lived a very different life to mine. Don't let them hide behind "the AI agreed with my assessment".
The regulation explicitly provides an exception for medical reasons:
Article 5:
1. The following AI practices shall be prohibited:
[...]
(f) the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons;
If I must interact with an AI for this, I'd prefer that it infer my emotions correctly.
The "business investors" and "innovators" can take this kind of business elsewhere.
This kind of talk where regulators are assaulted by free marketeers and freedom fighters is unacceptable here.
Let us not misinterpret business people as "innovators", if what they do is not net positive for the society, they do not belong here.
[1] https://www.europarl.europa.eu/news/fr/press-room/20240308IP...
The understanding is that interpreting laws leads to bias, partiality, and injustice; while following the letter of the law equally in each situation is the most just approach.
I lived in Lithuania for a while and at the time, there was a big national debate about how “family” should be defined in laws — what people it can and can’t include.
So yes — a lot of emphasis is put on verbose definitions in literalist legal texts. And very, very verbose explanations of many edge cases, too.
I know first hand it will be very hard to read Lithuanian legal texts for someone who is not a native speaker of the language, and even for natives it’s a challenge. So you could instead google “literalist legal systems”, and I believe you’ll find at least some examples/more context in English somewhere.
It's also quite clear that places without strong privacy protections like the US are developing into dystopian hellscapes.
Early adopters signed contracts with companies that provided shitty WiFi at high prices for a long time. A $500 hotel would have $30/night connections that were slow, while the Courtyard Marriott had it for free.
You can't have nice things, but on the bright side Google/Apple/Facebook won't know what you had for dinner.
Now give us your whole financial transaction and travel history, so we can share it with the US, a hostile country, citizen!
Nevermind the fact that you obviously come from a previliged position if you think that money is all that's important. You're blinded.
Then there's the nontrivial number of especially local US news sources which now give me a cheerful "451 Unavailable For Legal Reasons" error code.
Then there's the outright stupid stuff - like lightbulbs that do not cost 15 euros a piece (to save 'energy'), or drinking straws that do not dissolve in my coke within the first minute (to avoid 'disposable plastics'). There are hundreds of examples like that.
The EU is a regulation juggernaut, and is making the world an actively worse place for everyone globally. See "Cookie Banners".
So the EU should not control where your data is processed? You can't claim in one comment to be bummed about data exchanges between the EU and the US (which you do), and then not understand why there are regulations in place that are slowing down the roll-out of things like Apple Intelligence, for your benefit.
1. I am giving my data freely and because of my own decision to an organization I trust and
2. The state is taking my data by force of law to share it with an inherently untrustworthy organization.
I understood he was referring to incandescent light bulbs, which have been largely regulated out of the market. So you now need to get an "Edison light bulb" which circumvents the regulation but costs significantly more.
The problem here is that they tell you:
- We're building renewables which deliver super clean and cheap energy
- CO2 is the root of all evil
Then, they force you to use an LED which uses less energy (which is clean and cheap, no?) but contains a lot more chemicals and rare earths, so in a number of ways seems less "environmentally friendly". This seems contradictory and half-assed.
https://en.wikipedia.org/wiki/Phase-out_of_incandescent_ligh...
> A ban covering most general service incandescent lamps took effect in the United States in 2023
So you can't even buy them in the US anymore, either. And cheap LEDs are available everywhere, with many color temperatures to choose from.
Light temperature is one thing, but the spectrum is very different (which is essentially how it saves energy, almost all radiation emitted is a narrow band of visible light whereas an incandescent lamp produces a blackbody spectrum with a lot of radiation emitted in the infrared).
Also I notice a lot of color banding (not sure if that's the right term) with many cheap LEDs. I observe the same or at least a very similar phenomenon when watching DLP cinema projections.
Yes, it only affects airlines that have connections to the US. But if I book Lufthansa from Frankfurt to Tokyo, the PNR will still be sent to the US, for Lufthansa has connections to the US.
Yes, there are 'safeguards' in there, to shackle the DHS to be responsible with the data - but who seriously thinks the data, once in US hands, is used responsibly and only for the matters outlined in the treaty? The US has been less of a reliable partner for decades now.
Oh, right. They won't do that for financial transactions, right? Right?
https://eur-lex.europa.eu/EN/legal-content/summary/agreement...
Any proof of that claim? The agreement specifically mentions flights between the EU and the US, so any departure from that (like the scenario you describe) is unlawful, according to my own understanding.
Article 2.1 clearly states it is applicable to all EU airlines *operating* flights to or from the US. That does not mean they ONLY have to provide PNR FOR those flights.
Article 3 speaks about "Data in their (the airlines) reservation systems". There's no limitation to only US-related flights.
The specific mention of flights to and from the US that you are likely referring to is in the preamble, referencing a law the US set up prior.
Both documents clearly define the use cases that are applicable for the data sharing, and the second document linked by you also explicitly states that the US has to put in the same effort to provide the same capabilities to the EU as well.
We elected a President who tried to lead an armed insurrection but we'll never press criminal charges because we elected him President again.
Sorry, but anything the EU has ever done pales in comparison with that.
They hope the paperwork will be complete by 2053, which will allow an EU president to, hopefully, attempt some kind of coup (if everything is filled out correctly) sometime before 2060.
It is the utter bane of "move fast and break things", and I'm so glad to have it.
I will never understand the submissive disposition of Americans towards billionaires who sell them out. They are all about being rugged cowboys while smashing the systems that foster their own well-being. It's like their pathology to be independent makes them shoot themselves in the foot. Utterly baffling.
Except that the person responsible for the travesty of justice of framing 9 innocent people in this Dutch series is currently the president of the court of Maastricht.
https://npo.nl/start/serie/de-villamoord
Remember: the courts have the say as to who wins and loses under these new, vague laws. The ones running the courts have to not be corrupt. But the case above shows that this is in fact not the case.
> AI that manipulates a person’s decisions subliminally or deceptively.
That can be a hugely broad category that covers any algorithmic feed or advertising platform.
Or is this limited specifically to LLMs, since OpenAI has so successfully convinced us that LLMs really are AI and previous ML tools weren't?
> Exploitation of vulnerabilities of persons, manipulation and use of subliminal techniques
techcrunch simplified it.
from my reading, it counts if you are intentionally setting out to build a system to manipulate or deceive people.
edit — here’s the actual text from the act, which makes more clear it’s about whether the deception is purposefully intended for malicious reasons
> the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm
Either the behavior in question is actually bad in which case there shouldn't be exceptions, or there's actually nothing inherently wrong with it in which case you have misidentified the actual problem and are probably needlessly criminalizing a huge swathe of normal behavior beyond just the one exception you happened to think of.
Right now, and for at least the past 10 years, with targeted advertising, it has been completely normalised and typical to use machine learning to intentionally, subliminally manipulate people. I was taught, less than 10 years ago at a top university, that machine learning was classified as AI.
It raises many questions. Is it covered by this legislation? Other comments make it sound like they created an exception, so it is not. But then I have to ask, why make such an exception? What is the spirit and intention of the law? How does it make sense to create such an exception? Isn't the truth that the current behaviour of the advertising industry is unacceptable but it's too inconvenient to try to deal with that problem?
Placing the line between acceptable tech and "AI" is going to be completely arbitrary and industry will intentionally make their tech tread on that line.
Because instead of reading the source, you're reading a sensationalist article.
> That can be a hugely broad category that covers any algorithmic feed or advertising platform.
Again, read the EU AI Act. It's not like it's hidden, or hasn't been available for several years already.
----
We're going to get a repeat of GDPR aren't we? Where 8 years in people arguing about it have never read anything beyond twitter hot takes and sensationalist articles?
And in reading the act, I didn't see any clear definitions. They have broad references to what reads much like any ML algorithm, with carve outs for areas where manipulating or influencing is expected (like advertising).
Where in the act does it actually define the bar for a technology to be considered AI? A link or a quote would be really helpful here, I didn't see such a description but it is easy to miss in legal texts.
It is a simple law. You can read it in an afternoon. If you still don't understand it 8 years later, it's not the fault of the law.
> instead of 11 chapters and 99 sections
News flash: humans and their affairs are complicated
> all anyone got as a benefit from it is cookie banners
Please show me where GDPR requires cookie banners.
Bonus points: who is responsible for the cookie banners.
Double bonus points: why HN hails Apple for implementing "ask apps not to track", boos Facebook and others for invasive tracking, ... and boos GDPR which literally tells companies not to track users
That's the bit everyone forgets. GDPR didn't ask for cookie banners at all. It asked for consent in case consent is needed.
And most of the time consent is not needed since I just can say "no cookies" to many websites and everything is just fine.
If even consent does not apply, then the data shall not be processed. That's the end of it.
They got a few support tickets from people who thought they were still tracking, but just removed the banner.
By putting cookie banners everywhere and pretending that they are a requirement of the GDPR, the owners of the websites (or of the tracking systems attached to those websites) (1) provide an opportunity for people to say "yes" to tracking they would almost certainly actually prefer not to happen, and (2) inflict an annoyance on people and blame it on the GDPR.
The result: huge numbers of people think that the GDPR is a stupid law whose main effect is to produce unnecessary cookie banners, and argue against any other legislation that looks like it, and resent the organization responsible for it.
Which reduces the likely future amount of legislation that might get in the way of extracting the maximum in profit by spying on people and selling their personal information to advertisers.
Which is ... not a stupid thing to do, if you are in the business of spying on people and selling their personal information to advertisers.
Corporate sites track you and need the banner. It is intentionally obnoxious so that you click "accept all".
That partially explains the state of the tech industry in the EU.
But guess which had a more deleterious effect on Facebook ad revenue and tracking - Apple's ATT or the GDPR?
Consent for tracking must be freely given. You can't give someone something in return for it.
(And they are allowed to run as many non-tracking ads as they want.)
And? With GDPR the EU decided that private data cannot be used as a form of payment. It can only be voluntarily given. Similarly to using one's body: you can fuck whoever you want and you can give your organs if you so choose, but no business is allowed to be paid in sex or organs.
But how is your data that you give to Facebook “private” to you? Facebook isn’t sharing your data with others. Ad buyers tell Facebook “Put this ad in front of people between 25-30 who look at pages that are similar to $x on Facebook”
Well, per GDPR they aren't allowed to do that. Are they giving that option to users outside of EU? Why Not?
> The EU won’t let people make that choice are you saying people in the EU are too dumb to decide for themselves?
No I do not think that. What made you think that I think that?
What about sex and organs? In your opinion should businesses be allowed to charge you with those?
> But how is your data that you give to Facebook “private” to you?
I didn't give it to them. What is so hard to understand about that?
Are you saying that your browsing data isn't private to you? Care to share it?
Because no other place thinks that their citizens are too dumb to make informed choices.
> What about sex and organs? In your opinion should businesses be allowed to charge you with those?
If consenting adults decide they want to have sex as a financial arrangement why not? Do you think these 25 year old “girlfriends” of 70 year old millionaires are there for the love?
> I didn't give it to them. What is so hard to understand about that?
When you are on Facebook’s platform and you tell them your name, interests, relationship status, check ins, and on their site, you’re not voluntarily giving them your data?
> Are you saying that your browsing data isn't private to you? Care to share it?
If I am using a service and giving that service information about me, yes I expect that service to have information about me.
Just like right now, HN knows my email address and my comment history and where I access this site from.
From the European mindset: private data is not "given" to a company, the company is temporarily allowed to use the data while that person engages in a relationship with the company, the data remains owned by the person (think copyright and licensing of artistic works).
American companies: think that they are granted ownership of data, just because they collect it. Therefore they cannot understand or don't want to comply with things like GDPR where they must ask to collect data and even then must only use it according to the whims of the person to whom it belongs.
In case of Facebook (or tracking generally) you had no chance to make an informed choice. You are just tracked, and your data is sold to hundreds of "partners" with no possibility to say "no"
> Just like right now, HN knows my email address and my comment history and where I access this site from.
And that is fine. You'd know that if you spent about one afternoon reading through GDPR, a regulation that has been around for 8 years.
A distinction without meaning. Here's your original statement: "no other place thinks that their citizens are too dumb to make informed choices."
Questions:
At which point do you make informed choice about the data that Facebook collects on you?
At which point do you make informed choice about Facebook tracking you across the internet, even on websites that do not belong to Facebook, and through third parties that Facebook doesn't own?
At which point do you make an informed choice to let Facebook use any and all data it has on you to train Facebook's AI?
Bonus questions:
At which point did Facebook actually start giving users at least some information on the data they collect and letting them make an informed choice?
You make an “informed choice” when you create a Facebook account, give Facebook your name, date of birth, your relationship status and who you are in a relationship with, your sexual orientation, when you check in to where you have been, when you click on and buy from advertisers, when you join a Facebook group, when you tell it who your friends are…
Should I go on? At each point you made an affirmative choice about giving Facebook your information.
> At which point do you make informed choice about Facebook tracking you across the internet, even on websites that do not belong to Facebook, and through third parties that Facebook doesn't own?
That hasn’t been the case since 2018.
https://martech.org/facebooks-removal-of-third-party-targeti...
With ATT, Facebook doesn’t collect data from third party apps at least on iOS if you opt out. It’s cost Facebook billions of dollars
https://www.forbes.com/sites/kateoflahertyuk/2022/04/23/appl...
> At which point did Facebook actually start give users at least some information on the data they collect and letting them do an informed choice?
https://www.vox.com/2018/4/14/17236072/facebook-mark-zuckerb...
[0] https://www.theverge.com/2018/4/11/17225482/facebook-shadow-...
So, the companies that implement these cookie banners are entirely without blame, right?
So what is your solution?
Reminder: GDPR is general data protection regulation. It doesn't deal with cookies at all. It deals with tracking, collecting and keeping of user data. Doesn't matter if it's on the internet, in you phone app, or in an ofline business.
Reminder: if your solution is "this should've been built into the browser", then: 1) GDPR doesn't deal with specific tech (because tech changes), 2) when governments mandates specific solutions they are called overreaching overbearing tyrants and 3) why hasn't the world's largest advertising company incidentally owning the world's most popular browser implemented a technical solution for tracking and cookie banners in the browser even though it's been 8 years already?
> But guess which had a more deleterious effect on Facebook ad revenue and tracking - Apples ATT or the GDPR?
In the long run most likely GDPR (and that's why Facebook is fighting EU in courts, and only fights Apple in newspaper ads), because Apple's "ask apps to not track" doesn't work. This was literally top article on HN just yesterday: "Everyone knows your location: tracking myself down through in-app ads" https://timsh.org/tracking-myself-down-through-in-app-ads/
So what is your solution to that?
They made no such announcement after the GDPR.
What’s my solution? There isn’t one, you know because of the way the entire internet works, the server is going to always have your IP address. For instance, neither Overcast or Apple’s podcast app actively track you or have a third party ad SDK [1]. But since they and every other real podcast player GET both the RSS feed and audio directly from the hosting provider, the hosting provider can do dynamic ad insertion based on your location by correlating it to your IP address.
What I personally do avoid is not use ad supported apps because I find them janky. On my computer at least, I use the ChatGPT plug in for Chrome and it’s now my default search engine. I pay for ChatGPT and the paid version has had built in search for years.
And yet they make no move against Apple, and they are fighting EU in courts. Hence long term.
> There isn’t one, you know because of the way the entire internet works, the server is going to always have your IP address.
Having my IP address is totally fine under GDPR.
What is not fine under my GDPR is to use this IP address (or other data) for, say, indefinite tracking.
For example, some of these completely innocent companies that were forced to show cookie banners or something, and that only want to show ads, store precise geolocation data for 10+ years.
I guess something something informed consent and server will always have IP address or something.
> What I personally do avoid is not use ad supported apps because I find them janky.
So you managed to give me a non-answer based on your complete ignorance of what GDPR is about.
What “move” could they do against Apple?
> So you managed to give me a non-answer based on your complete ignorance of what GDPR is about.
You asked me how do I avoid it? I do it by being an intelligent adult who can make my own choices
No, being free to abuse others is not a positive feature. Not for tech, not for politics, not for business.
You could point out a specific section or page number, instead of wasting everyone's time. The vast majority of people who have an interest in this subject do not have a strong enough interest to do what you have claim to have done.
You could have shared, right here, the knowledge that came from that reading. At least a hundred interested people who would have come across that clear definition in your comment will now instead continue ignorantly making decisions you disagree with. Victory?
AI used for social scoring (e.g., building risk profiles based on a person’s behavior) - Oh, so insurance and credit scores are banned now? And background checks.
AI that manipulates a person’s decisions subliminally or deceptively. - Oh, so no more ads?
AI that exploits vulnerabilities like age, disability, or socioeconomic status. - Oh, are we banning facebook now?
AI that attempts to predict people committing crimes based on their appearance. - pretty sure that exists somewhere too.
AI that uses biometrics to infer a person’s characteristics, like their sexual orientation. - oh my, tiktok does not even need biometrics, just a couple of swipes. Google too, actually, just from where you visit.
AI that collects “real time” biometric data in public places for the purposes of law enforcement. - but cameras everywhere are ok.
AI that tries to infer people’s emotions at work or school. - like every social network, right? or a company with toxic marketing, but without ai (hello, apple with green bubbles)
AI that creates — or expands — facial recognition databases by scraping images online or from security cameras. - oh, this also probably exists. So companies could track clients.
It would probably be about as useful as GDPR. Like, of course, it sounds nice on paper, but in reality it will get drowned in a lot of legalese. Like with tracking consent forms nowadays. Do you know which companies you gave consent to, and when? Me neither.
The issue with such laws is that they are extremely wide and hard to regulate/enforce/check. But making regulation earns a few political points, while probably not being so useful in real life.
We have already been doing a lot that falls under these buckets for years; big tech uses AI for algorithms left and right. "Ooopsie, we removed your youtube channel / application, because our AI system said so. You can talk to another AI system next." - we already have these, but I don't hear any reasonable feedback from the EU on this.
Basically, big companies with strong legal departments would find the way around the rules. Small startups would be forced to move.
People can just handwave catastrophic decisions away with "the computer made an error, nothing we can do". This has been the case before AI; the difference AI makes is just that more decisions are going to be affected by this.
What we need is to make the (legal) buck stop somewhere, ideally in a place that can positively change things. If you’re a civil engineer and your bridge collapses, because you fucked up the material selection, you go to jail. If you are a software engineer and you make design decisions in 2025 that would have had severe security implications in the 80s — and then this leads to the leaking of millions of medical records you can still YOLO it off somehow and go to work on the next thing.
The buck has to stop somewhere with software and it doesn't really. I know that is a feature for a certain type of person, but it actively makes the world worse.
Europe's tech sector will continue to wither as America and others surge ahead.
You can't regulate your way to technological leadership.
You can write about anything to make it sound bad, even when it's good, and vice versa.
Need to focus on outcomes.
Should have been
> AI that attempts to predict people committing crimes
Do you think they are going to fine their own initiatives out of existence? I don't think so.
However, they also have a completely extrajudicial approach to fighting organised crime. Guaranteed to be using AI approaches on the banned list. But you won't get any freedom of information request granted to investigate anything like that.
For example, any kind of investigation would often involve knowing which person filled a particular role. They won't grant such requests, claiming it involves a person, so it's personal. They won't tell you.
Let's have a few more new laws that protect the citizens please, not government SLAPP handles.
> 2. For the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including safeguarding against and preventing threats to public security, under the control and responsibility of law enforcement authorities, the processing of personal data in AI regulatory sandboxes shall be based on a specific Union or national law and subject to the same cumulative conditions as referred to in paragraph 1.
https://artificialintelligenceact.eu/article/59/
Seems like it allows pretty easily for nation states to add in laws that allow them to skirt around it.
But ECHR is not part of EU law, especially it is not binding on the European Commission (in the context of it being a federal or seemingly federal political executive). This creates a catch-22 where member states might be violating ECHR but are mandated by EU law, though this is a very fringe consequence arising out of legal fiction and failed plans to federalize EU. Most recently, this legal fiction has become relevant in Chat Control discourse.
Great Britain and Poland have explicit opt-outs out of some European law.
Your original take: "Should have been: AI that attempts to predict people committing crimes"
Recital 42, literally:
--- start quote ---
In line with the presumption of innocence, natural persons in the Union should always be judged on their actual behaviour. Natural persons should never be judged on AI-predicted behaviour based solely on their profiling, personality traits or characteristics, such as nationality, place of birth, place of residence, number of children, level of debt or type of car, without a reasonable suspicion of that person being involved in a criminal activity based on objective verifiable facts and without human assessment thereof.
Therefore, risk assessments carried out with regard to natural persons in order to assess the likelihood of their offending or to predict the occurrence of an actual or potential criminal offence based solely on profiling them or on assessing their personality traits and characteristics should be prohibited.
In any case, that prohibition does not refer to or touch upon risk analytics that are not based on the profiling of individuals or on the personality traits and characteristics of individuals, such as AI systems using risk analytics to assess the likelihood of financial fraud by undertakings on the basis of suspicious transactions or risk analytic tools to predict the likelihood of the localisation of narcotics or illicit goods by customs authorities, for example on the basis of known trafficking routes.
--- end quote ---
> Seems like it allows pretty easily for national states to add in laws that allow them to skirt around
Key missed point: "subject to the same cumulative conditions as referred to in paragraph 1."
Where paragraph 1 is "In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: ... list of conditions ..."
-----
In before "but governments can do whatever they want". Yes, they can, and they will. Does it mean we need to stop any and all legislation and regulation because "government will do what government will do"?
I think the EU has done better following its own rules than most other countries (not that it's perfect in any way).
It might be too little too late to stop the flood though: https://www.foxnews.com/us/tech-company-boasts-its-ai-can-pr...
link to the q&a: https://ec.europa.eu/commission/presscorner/detail/en/qanda_...
(both linked in the article)
https://vpc.org/press/states-with-strong-gun-laws-and-lower-...
I'll gladly live in a country with no AI at all. Give me Dune post-Butlerian jihad levels of AI outlawing and I'll move there. I strongly believe that myself and all the people living there will be much happier.
And the obvious whataboutism is obvious. Yes, you can find other sources for information on, say, developing bio weapons elsewhere. Does that mean you should have systems that aid you in collecting, synthesizing and displaying that information? That with the right interfaces and actuators can actually help you move towards that goal?
There's a line somewhere, that is very hard to draw, and yet should be drawn regardless.
The threshold to building any of these save nukes is extremely low, and nukes are only high because there are fewer use cases for radioactive material so it's simply less available.
I think this is a massive oversight, for a few reasons:
1. Things will continue to be done, just elsewhere. The EU could find itself scrambling to catch-up (again) because of their own regulation.
2. Increased oversight is only part of the picture; the real challenge is, even with the oversight, proving that the AI is acceptably safe, or that the risk is acceptable.
3. Some things are inherently not safe, e.g. war. I know many (almost all) military tech companies using AI, and the EU is about to become an impossible investment zone for these guys.
I think this will make investment into the EU tough, given tonnes of investment is now focused around AI. AI is and will likely remain the fuel to economic growth for quite some time, and the EU adding a time/money tax to the fuel.
I think it’s more likely that companies would adhere to EU regulations and use the same model everywhere or implement some kind of filter.
When I attended a conference about this I remember the distinction between "Provider" and "Deployer" being discussed. Providers are manufacturers developing a tool, deployers are professional users making a service available using the tool. A deployer may deploy a provided AI tool/model in a way that falls within the definition of unacceptable risk, and it is (also) the deployer's responsibility to ensure compliance.
The example given was of a university using AI for grading. The university is a deployer, and it is their responsibility to conduct a rights impact assessment before deploying the tool to its internal users.
This was compared to normal EU-style product safety regulation, which is directed at the manufacturer (what would be the provider here): if you make a stuffed toy, don’t put in such and such chemicals, etc. Here, the _application_ of the tool is under scrutiny as well vs just the tool itself. Note that this is based on very hasty notes[0] from the talk - I'm not sure to what extent the provider vs deployer responsibility divide is actually codified in the act.
[0] https://liza.io/ai-act-conference-2024-keynote-notes-navigat...
There are plenty of companies in the EU using and developing AI, despite Americans saying we have "heavy regulation". It just isn't in the same ballpark as the US and China, which both have much bigger potential markets and a stronger VC base with, of course, more money.
The lack of regulations from the US in AI creates a very harsh atmosphere for the population.
It's so naive to think that Meta/Google (YouTube) don't have the power to manipulate people's opinions by showing content based on their algorithms. That's all manipulation through the use of AI.
They are thinking for you. Making you depressed, making you buy useless stuff.
Look at the research on this subject and you will be surprised how much the likes of Meta and Google are getting away with.
Hope to see more EU fines for American Big Tech firms using AI to abuse people's weaknesses.
We have that here too, except in our case it’s the government using the good old fashioned medium of television.
Do European politicians understand that laws like this are usually dead letters? There is no way a law like that can be enforced except against large companies.
Also, this kind of law would leave Europeans on the losing side of the AI competition, as China and pretty much every US corporation simply don't care about it.
> Also, this kind of law would leave Europeans on the losing side of the AI competition, as China and pretty much every US corporation simply don't care about it.
Not sure that's a game I want to win.
The law will only ensure that good companies like MistralAI or Black Forest Labs stay in the shadows.
This is the same idiocy as the Republican senator who wants to prohibit DeepSeek usage in the US.
As for legality, what exactly is the illegal thing AI shouldn't do? Much of that knowledge is already accessible from books, even how to build weapons or explosives.
The banned use cases are very specific and concern systems explicitly designed for such dystopian shit. AI giving advice on how to build weapons or explosives is not banned here. The "unacceptable risk" category does not concern companies like MistralAI or Black Forest Labs. This is not the same idiocy.
For instance, discussing or questioning Nazism is illegal in Germany but allowed in many other countries. Should every LLM be restricted globally just because Germany deems it illegal?
Similarly, certain drugs are legal in the Netherlands but illegal in other countries, sometimes even punishable by death. How do you handle such discrepancies?
Let's face it: most of the time, LLMs follow US-centric anti-racism guidelines, which aren't as prominent or necessary in many parts of the world. Many countries have diverse populations without significant racial tensions like those in the United States, and don't prioritize African, Asian, or Latino positivity to the same extent.
Moreover, in the US, discussions about the First or Second Amendment are common, even among those with opposing views, but free speech and gun rights are taboo in other societies. How do you reconcile this?
In practical terms, if an LLM refuses to answer questions because they're illegal in some countries, users will likely use uncensored models instead, rendering the restricted ones less useful. This is why censorship is never successful, except in North Korea and China.
Take Stable Diffusion as an example: the most popular versions (1.5, XL, Pony) are flexible for unrestricted use, whereas intentionally censored versions (like 2.1 or 3.0) have seen limited adoption.
I, for one, welcome our Chinese communist overlords.
A vibrant tech ecosystem is a large part of the reason for both.
Other fields have very similar laws in the EU and there's lots of tiny companies able to comply with those. The risk control required by this law is the same that's required by so many other EU laws. Most companies that make high-risk products have no problem at all implementing that.
That's not true. The regulation first defines high-risk products with a narrow scope (see article 6 and annex III). It then requires risk management to be implemented. It does not explicitly state which risks are acceptable; it only requires the "adoption of appropriate and targeted risk management measures" that are effective to the point that the "overall residual risk" of the system is "judged to be acceptable".
IANAL, the whole story is a bit more complex. But not by much.
That is similar to, say, some substance being banned above a certain concentration.
Information from AI is like moonshine. Too concentrated; too dangerous. There could be methyl alcohol in there that will make you go blind. Must control.
Only making use (i.e. putting into service a product containing it or placing that product on the market) of that function in a manner that is listed in article 5 (which is quite terse and reasonable) is prohibited unless covered by an exception.
Making use of that function in a manner that may be high-risk (see article 6 and annex III, also quite terse and reasonable) leads to the requirement of either documenting why it isn't high-risk or employing measures to ensure that the risk is acceptable (see article 9, item 5).
IANAL
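To make that tiering concrete, here is a toy sketch of the decision flow as I read it (hypothetical Python with made-up use-case names and category contents; the real legal tests in articles 5, 6 and 9 are far more nuanced):

    # Rough sketch of the Act's tiered logic; the category sets are invented.
    PROHIBITED_USES = {"social_scoring", "realtime_biometric_id_for_policing"}
    HIGH_RISK_USES = {"cv_screening", "exam_grading", "credit_scoring"}

    def classify_use_case(use_case: str, has_exception: bool = False) -> str:
        # Article 5: prohibited practices, unless a listed exception applies.
        if use_case in PROHIBITED_USES and not has_exception:
            return "prohibited (article 5)"
        # Article 6 / annex III: high-risk, which triggers article 9 risk
        # management (or documentation of why it is not actually high-risk).
        if use_case in HIGH_RISK_USES:
            return "high-risk: risk management required (article 9)"
        return "minimal/limited risk: ordinary obligations only"

    print(classify_use_case("exam_grading"))  # high-risk: risk management required

Again, IANAL; this is only meant to show the shape of the rules, not their content.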
The US gave a real gift to the world with "extra-territorial" laws: now the EU uses them everywhere too! :-)
Sooooo... GAFAM will either have to "limit" some of their AI systems when used in the EU (NOT including EU citizens who may be abroad, but including foreign citizens in the EU) or be fined.
And I guess that these fines may accumulate with GDPR fines, for example...
This is a strange one. Arguably this is the objective of marketing in general. Therefore, I'm not sure why draw the line only when AI is involved.
I know how to make chemical weapons in two distinct ways using only items found in a perfectly normal domestic kitchen; that doesn't change the fact that chemical weapons are in fact banned.
"""The legal framework will apply to both public and private actors inside and outside the EU as long as the AI system is placed on the Union market, or its use has an impact on people located in the EU.
The obligations can affect both providers (e.g. a developer of a CV-screening tool) and deployers of AI systems (e.g. a bank buying this screening tool). There are certain exemptions to the regulation. Research, development and prototyping activities that take place before an AI system is released on the market are not subject to these regulations. Additionally, AI systems that are exclusively designed for military, defense or national security purposes, are also exempt, regardless of the type of entity carrying out those activities.""" - https://ec.europa.eu/commission/presscorner/detail/en/qanda_...
Also note that the law has explicit exceptions for research, development, open source and personal use.
Also, the definition of AI seems to exclude anything that doesn't "exhibit adaptiveness after deployment". So, a big neural network doing racist facial recognition crime prediction isn't AI as long as it can't learn on-the-fly? Is my naive HTTP request rate limiter "exhibiting adaptiveness" by keeping track of each customer's typical request rate in a float32?
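To make the borderline concrete, here is a minimal sketch of such a rate limiter (hypothetical code, names invented): it "learns" each customer's typical request rate at runtime as an exponential moving average stored in a single float, which is exactly the kind of statistical adaptation the wording arguably brushes against.

    # Hypothetical adaptive rate limiter: tracks an exponential moving
    # average of each customer's request rate and throttles bursts that
    # exceed a multiple of that learned baseline.
    import time
    from collections import defaultdict

    ALPHA = 0.1          # smoothing factor for the moving average
    BURST_FACTOR = 5.0   # allow up to 5x the customer's typical rate

    ema_rate = defaultdict(lambda: 1.0)   # learned requests/sec per customer
    last_seen = {}

    def allow_request(customer_id: str) -> bool:
        now = time.monotonic()
        prev = last_seen.get(customer_id)
        last_seen[customer_id] = now
        if prev is None:
            return True
        instant_rate = 1.0 / max(now - prev, 1e-6)
        # This float changes with traffic after deployment: "adaptiveness"?
        ema_rate[customer_id] = (1 - ALPHA) * ema_rate[customer_id] + ALPHA * instant_rate
        return instant_rate <= BURST_FACTOR * ema_rate[customer_id]

Nobody would seriously call that an AI system, yet it does adapt after deployment, which is exactly why pinning down the terms matters.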
Laws that regulate tech need to get into the weeds of exactly what is meant by the various terms up-front, even if that means loads of examples, clarification etc.
FTFY: AI that attempts to predict people committing crimes.
By "appearance" are they talking about a guy wearing a hoodie must be a hacker or are we talking about race/colour/religious garb etc?
I'd rather they just didn't use it for any kind of criminal application at all if I have a say in it!
Just my $0.02
The Techcrunch article oversimplifies and is borderline misleading.
Instead of relying on Techcrunch and speculating, you could read sections (33), (42), and (59) of the EU AI Act yourself.
- Could you tell from an image if a man is gay?
- Depends on what he is doing.
yes
With time this is worsening: the caste grows ever bigger, and the system will not change until a WW2-type situation.
It is not like it was a safe democracy. But it is still one, and one that cares more about its own citizens than about the rest. Maybe except Canada.
Here is what happened in most corporations when GDPR came out:
- A new Chief Privacy Officer would be appointed,
- A series of studies would be conducted by big consulting firms, with a review of all processes and data flows across the organisation,
- After many meetings they would conclude that a move to the cloud (one of the big ones) is the best and safest approach. The Chief Privacy and Legal Officer would put their stamp on it with some reservations,
- This would usually accelerate a lot of outsourcing and/or workforce reduction in IT,
- Bonus if a big "data governance" platform is bought and half implemented.
Do you have a source on that, or is this what you feel like may have happened? The move to the cloud was in full swing way before GDPR came out in 2016 and got enacted in 2018. Same for outsourcing.
In terms of timeline I can tell you:
- by 2012 I had already heard about that regulation, but only knew it was going to be about data protection. At that time some "Big Tech" lobbying groups were already organising events in Brussels raising awareness about how important data privacy and protection are. I have been to some of those events and even witnessed very heated exchanges between some EU people and lobbyists about it.
Which proves that a lot of people knew about it way before that time.
- by 2014 many big corporations were already preparing for GDPR; big budgets had already been approved. At that time they already knew it would be at least reasonably disruptive and that they had to start early to prepare.
Also remember that before 2014, "Windows Azure" (what would become the most successful cloud for most European corporations) was absolutely not ready as an enterprise product.
So these were not Silicon Valley startups running on AWS since 2006; for many decision makers in those big corporations, the upcoming GDPR problem predated the cloud solution.
GDPR applies to data in the cloud too.
“Where processing is to be carried out on behalf of a controller, the controller shall use only processors providing sufficient guarantees to implement appropriate technical and organisational measures in such a manner that processing will meet the requirements of this Regulation and ensure the protection of the rights of the data subject.”
…
what follows is a list of some pretty nasty and insidious use cases.
it’s not “AI is completely banned”, it’s “consider the use cases you are working on responsibly”. only for those specific use cases, mind you.
for all other use cases not in the list, which is a significantly larger subset of development, just ensure you do the required safety/regulatory sign off work.
just like when we get our SaaS webapps evaluated for compliance with security standards, it's just a box-ticking exercise for the most part.
When I talk to ChatGPT advanced voice mode with a happy and upbeat tone, it replies similarly. If I talk to it in a more neutral way, it adapts as well. The AI thus infers my emotions. I use ChatGPT at work; my company pays for it.
Sounds like I should sue.
Also, I am trying to implement a new policy for pull requests in my tech team. We send an anonymous form to gather feedback. I sent all the responses in one block to ChatGPT and asked it to summarize the feedback. The AI indicated that “generally people seem pretty happy about the new policy”. Should I go to jail now for clearly being a deranged madman, according to the EU?
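For what it's worth, the workflow in question is roughly this (a sketch assuming the current OpenAI Python client; the model name and prompt are placeholders). Nothing in it targets a specific natural person: the responses go in as one anonymous block.

    # Sketch of the anonymous-feedback summary described above. The model
    # name is a placeholder; the client reads OPENAI_API_KEY from the env.
    from openai import OpenAI

    client = OpenAI()

    def summarize_feedback(responses: list[str]) -> str:
        blob = "\n---\n".join(responses)  # anonymous answers, one block
        completion = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder
            messages=[
                {"role": "system",
                 "content": "Summarize the overall sentiment and key themes of this anonymous survey feedback."},
                {"role": "user", "content": blob},
            ],
        )
        return completion.choices[0].message.content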
> the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions
emphasis mine.
chatgpt was not specifically put into service to monitor your emotions at work.
so it’s fine i’d say. and your pull request thing is fine.
also, you’re not trying to infer the emotions of any specific natural person. you’re trying to gauge satisfaction with a process. that’s different to working out whether someone is feeling sad or lonely in the workplace because they “aren’t smiling enough”.
unfortunately that means you can’t sue and get a payday.
edit — i find it kind of funny that people are knee jerk reacting emotionally to this. kind of ironic when you consider the example at hand.
It depends highly not only on how it's written but also on the spirit of what the EU is attempting to do. The knee-jerk reaction is probably that, historically, institutions have done a terrible job of writing rules, especially rules around new technology.
So basically I just need to second guess everything I do, until someone somewhere gets sued and loses and another dude gets sued and loses. At that point we will have some idea about what the law really entails (at which point they change it and the cycle restarts)
In the meantime, my US competitors are just moving full steam ahead.
A quick search reveals DeepMind, Skype, SwiftKey, Shazam, Moodstocks for the former. Bit of overlap with the latter, too, as e.g. AlphaFold is from DeepMind after getting bought.
Quick look on the Apple App store also gets me Komoot (Germany), Trade Republic (Germany), Revolut (UK), Babbel (Germany).
Aside from them, ETH Zürich and CERN are doing pretty good work, too, the latter inventing the modern hypertext based web on which you are currently reading this.
Cambridge has some decent digital tech; it also has Metalysis and The Welding Institute, and it is where the double-helix structure of DNA was discovered and where Stephen Hawking chose to work.
1) why they were bought by the American companies
2) that having an American owner doesn't make them directly American or magically cause them to be in Silicon Valley
3) The country names I put in brackets
4) The location of Zürich and CERN
And instead want to focus on the fact that one specific example of a top ranked app is not all by itself an entire sector, while ignoring all the other examples *right next to it* or the fact that this was trivial to find.
To demonstrate why you're missing the wood for the trees, consider: I can accurately say "Facebook" isn't really all that important, it's just an advertising provider getting in the way of people trying to talk to each other — but that it isn't all of Silicon Valley all by itself doesn't mean its headquarters are not relevant as an example of "Silicon Valley".
Can we use this to ban X and other American algorithmic political propaganda tools?
I am pretty sure they made major changes after they dumped the code. Considering that "verified boosts" and "Elon boosts" are very noticeable, with the first being a confirmed "feature", I doubt the dumped algorithm would even remotely work with today's data.
Anyway, what I wanna say is that the last commit was over 2 years ago.
It’s maddening that a group of non-elected politicians and their friends have this kind of power and are using it to destroy Europe and our future.
Or is it something that is open to interpretation, left for the courts to sort out, with a 15,000,000 euro fine if someone in the EU has leverage over the courts and doesn't like you?
Oh and the courts will already kill any small startup.
> to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics;
more details in recital 42: https://artificialintelligenceact.eu/recital/42/
in addition, having front doors, idk, calling the police on people just because a situation is "unusual" would be quite dystopian and would most likely lead to far more damage for society as a whole than it would prevent. so instead of your door trying to "detect a maybe-soon-to-happen crime", it could "try to detect an unusual situation which might require human action" and then have a human (you, or someone in a call center if you aren't available) take that action (which might be just one or two button presses; nothing prevents you from taking the action by directing the AI to do it for you)
and let's not forget we are speaking about before the break-in (and maybe no break-in at all, because it's actually a Halloween costume or similar); if the system detects someone actually breaking in, we have an action to take
Arguably, an AI security system with great objective understanding of the unfolding circumstances would be a lot better than one profiling people passing by and raising an alarm each time a person that looks a certain way walks by.
It’s just that simple CV-based classification, perhaps trained with unsupervised learning, is easier to build than a system that observes a chain of actions. The labelled data set is usually obtainable from police orgs if you want to simply train an AI to look at people and judge them based on visual traits. By saying “this easy way is not good enough”, the EU is encouraging technological development in a way. Develop a system that’s more objective than visual profiling, and the market is yours.
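As a crude illustration of that distinction (entirely hypothetical event names and sequence; a real system would be far more involved), the alternative reacts to a chain of observed actions rather than to who the person appears to be:

    # Hypothetical sketch: alarm on a chain of observed actions, not on
    # appearance. Event names and the "suspicious" sequence are made up.
    SUSPICIOUS_SEQUENCE = ["loiter_at_door", "test_handle", "force_entry"]

    def alarm_on_behaviour(observed_events: list[str]) -> bool:
        """True only if the suspicious steps occur in order within the
        observed events, independent of any personal trait."""
        it = iter(observed_events)
        return all(step in it for step in SUSPICIOUS_SEQUENCE)

    print(alarm_on_behaviour(["walk_past", "loiter_at_door", "test_handle", "force_entry"]))  # True
    print(alarm_on_behaviour(["wear_hoodie", "walk_past"]))  # False

Harder to build than a per-frame appearance classifier, but that's exactly the kind of system the Act still leaves room for.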
Until another braindead legislator finds another thing he can rally against and throws a stick between my legs.
There are reasons innovation happens in China and, to a lesser extent, in the United States. This is one of them.