Yeah, they take that into account when calculating inflation, which is why CPI is quoted at 1% when the cost of housing, food, gas, health care, college tuition (i.e. everything you actually need) is more like 10%.
I see a lot of similarities between "AI", as understood by business people, and the pursuit of alchemy and the philosopher's stone. It seems like a hustle to separate rich dupes from their money by promising them the keys to infinite wealth, immortality, Mars colonies, etc. In that respect it is mostly harmless, but it can be quite dangerous to the people who take it seriously.
I don't understand why people so seldom go straight for the knowledge. It seems pretty obvious by now that knowledge is the key itself. Though subjective sciences like Kabbalah help.
This is a tired trope, that “business people” are brainless and gullible. Almost as tired as the “VCs are so dumb they’ll throw money at anything-AI” idea repeated in the article.
Maybe, just maybe, the people running multi-billion-dollar companies, and multi-billion-dollar investment funds, are not stupid?
It would be much more interesting and productive to discuss what they see in AI and why they feel so much urgency, rather than dismissing them as fools falling for magic.
I can only speak for managers I have met: what they “see” is a promise of cost cutting and an opportunity to tell higher managers that they are ahead of the hype. But never, ever, have I seen one of these people say anything that suggests they have even a vague idea of what AI or ML is, or what it can realistically achieve. And I’m not sure they are interested. Because, as you say, they aren’t stupid; I just think they are part of a game of BS I don’t understand, involving higher managers, investors, etc.
I don’t think it’s so much about producing anything using AI; it’s AI for the sake of saying you are using it.
So perhaps not all fools, but somewhere between con artist, willfully ignorant, and fool.
(Note: this is all from “traditional” industry, i.e. the manager at the hammer factory proudly launching initiatives to “use more AI” in the factory. Not the tech industry, and no plausible or concrete use of AI.)
You are repeating the trope, just substituting “stupid” with other deriding terms: BS, con artist, willfully ignorant, fool...
Does it really have to be those things just because you don’t understand it? Is it possible those people running multi-billion-dollar organizations just know something you don’t, or have a perspective that you don’t?
> Is it possible those people running multi-billion-dollar organizations just know something you don’t, or have a perspective that you don’t?
While it is possible, I'm kind of tired of hearing people defending the strong and powerful. And I'm kind of tired of hearing the perspective of CEOs. It's pretty much all we hear in fact.
People 'running multi-billion-dollar organizations' are like the Greek gods to us: capricious, arbitrary, and powerful, but also subject to the same flaws as normal people.
However, they do have one great privilege: that of being completely above the effects of their actions.
They don't need people to defend them. The whole damn system is designed to adulate them. We 'pray' to these people every day without realizing it.
Updated my answer: these are middle managers such as division or site managers, not the top managers of the billion-dollar corporations (who might well have great visions, but it’s not on them to implement them). As I clarified, these projects are always vague initiatives such as “use more AI in our process” or downright marketing stupidity such as “Joe, I need you to work AI into the description of our latest hammer model”. Again, this is old, traditional industry.
Sounds like a great opportunity to provide value and charge accordingly.
Baking off-the-shelf anomaly detection into the hammer QA process might seem easy and “not true AI” to us, but it solves their problem. Maybe you can even educate them in the process, and explain the differences between AI, ML, DL, different use cases and methods, libraries, etc. I suspect, though, they won’t care because they just want to hit their business goals.
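To make "off-the-shelf anomaly detection in the hammer QA process" concrete, here is a minimal sketch. Everything in it is hypothetical (the weights, the metric, the threshold), and a real production line would more likely plug in a library detector such as an isolation forest; a robust median/MAD rule stands in for one here because it fits in a few lines:

```python
import statistics

def flag_anomalies(readings, threshold=3.5):
    """Return indices of readings whose modified z-score exceeds
    the threshold. Median/MAD is used instead of mean/stdev so
    the outliers themselves don't inflate the spread estimate."""
    med = statistics.median(readings)
    mad = statistics.median(abs(r - med) for r in readings)
    if mad == 0:
        return []  # no spread at all: nothing to flag
    return [i for i, r in enumerate(readings)
            if 0.6745 * abs(r - med) / mad > threshold]

# Hypothetical hammer-head weights in grams; unit 5 is off-spec.
weights = [454.1, 453.8, 454.3, 453.9, 454.0, 512.6, 454.2, 453.7]
print(flag_anomalies(weights))  # [5]
```

Even something this simple would catch the grossly off-spec unit, which is the point: it solves their problem whether or not it counts as "true AI".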
You are right that successful business people are, on the whole, pretty intelligent. This isn’t always true of investors as individuals, but investment management tends to be reasonably smart too; some very much so.
I think FOMO is a real part of the cycle that causes a lot of bets to be made on untried technologies. Technologists might well reflect on how many of those bets are hedges, and what the true cost of failures are.
It’s also worth noting that a significant chunk of the most egregious bullshit I see flung around in the area is coming from technologist driven startups in the space...
Laws like this are long past due. This particular law may not be the best implementation but governments do need to take action to provide their citizens with a real public square online.
Excluding fake and paid users (without a declaration of who is paying them) from that space and protecting free speech in that space is essential to having a public policy discussion.
Private corporations have not done this, and probably never will, so government is the only option left.
The government may and should provide their citizens with a public square online, but neither do they need to take away anonymity on existing squares, nor does the public square need to be non-anonymous. The concept of being able to go out on the streets, make mistakes, and have them be forgotten is old and integral to public spaces IMO.
I think that governments should provide a verifiable digital ID the same way they provide physical IDs and that they should provide communications platforms that allow people to communicate with these digital IDs in a way that protects their rights the same way that they provide mail service.
I do not think that governments should be regulating/banning activity on private platforms but should be offering an alternative where private platforms fall short. This is really nothing more than bringing existing government services (ID, mail, voting, parliaments) fully up to date with digital technology.
There is a big gap right now where governments don't understand what they should be doing or are incapable of doing it. They recognize the need for action because they see that things are going wrong but are not proposing or implementing the correct solutions yet.
WTH has this to do with mail service - which inherently allows anonymity for the sender?
If anything, analogous to this law, no one would be allowed to send a letter without ID (which also means goodbye post boxes, I guess).
Oh, and if we want to extend the analogy - with current and pending legislation, the mail service would be liable if anyone breaks the law via mail...
This law is like having to provide and record ID when entering a concert, restaurant or other public places. It's an authoritarian dream, but will be shot down by courts, that's for sure.
I disagree. Laws like this are outwardly hostile to the culture of innovation that the internet enables.
> This particular law may not be the best implementation but governments do need to take action to provide their citizens with a real public square online.
I disagree. Governments do not need to provide a public online discussion forum. Plenty of forums already exist: Reddit, Facebook, Hacker News, various phpBBs, DeviantArt, the chans, Tumblr, Mastodon, etc. The beauty of the internet as it stands is that if you feel like you don't have a place to speak your mind online, it's extremely easy, and in many cases completely free, to create your own blog or forum. Government should not be involved in public discourse in any role besides that of an observer. It is important for a government to hear the words of its people, but it is ridiculous to suggest that a government should hold control of where people can have discussions or who can have them. This law is authoritarian at best, dystopian at worst.
> Excluding fake and paid users (without a declaration of who is paying them) from that space and protecting free speech in that space is essential to having a public policy discussion.
Protecting free speech is very important. However, this law does no such thing. In fact, it makes it easier to persecute people whose opinions you don't like. Requiring identification for the exercising of free speech is not free speech at all.
> Private corporations have not done this, and probably never will, so government is the only option left.
Regulation on how private companies handle their users is not inherently a bad thing. This law, however, is not only stifling to free speech, it prevents smaller players from entering the game. It makes it hard for individuals to create their own small online communities by forcing them to navigate murky legal waters.
This law is bad. Everything about this law is bad. There is no single part of this law that is good for the internet.
It's only a good thing if you're not going to say or be anything that would get you beaten up if everyone were to know about it. It's easy for people who are, in the example of Austria, White, straight, cis, and either Christian or at the very least not Muslim, to decry anonymity, but if you're a member of a disfavored minority group, all de-anonymization does is open you up to abuse, discrimination, and, perhaps, violence.
You can take as an example the problems Facebook's "Real Names" policy caused for sexual minorities:
... except multiply it by a thousandfold when the government is in the pocket of a political party which is... disinclined... to acknowledge the existence or legitimacy of such people.
While de-anonymising users may or may not work to stop individual trolling and other abuse (the jury is still out), it most definitely enables organized troll groups. Therefore it's actually not a good idea at all.
The government isn't providing a public square. The internet is already that and more. The government is forcing everyone to wear visible name tags by threatening violence on those who disobey.
(4) knowingly and with intent to defraud, accesses a protected computer without authorization, or exceeds authorized access, and by means of such conduct furthers the intended fraud and obtains anything of value
A criminal investigation into whether or not this was really accidental would be entirely warranted here. If there was intent to access this information without authorized access that is criminal.
> A criminal investigation into whether or not this was really accidental would be entirely warranted here. If there was intent to access this information without authorized access that is criminal.
I don't understand this. Claiming that something is an accident and not intentional usually isn't much of an excuse when it comes to criminal acts.
"Obtaining anything of value" could be satisfied by getting personal data, which today is akin to profit, but the "intent to defraud" would be hard to prove in court, save for some very broad and dangerous interpretation of "intent" that equates sloppiness with malice, a precedent that might ruin the lives of honest people who just happen to be clueless sysadmins or developers.
Totally agree, though, on investigating whether this was really accidental or not; if it was done on purpose I would expect FB to be hit really hard.
Not a lawyer, but at least in my jurisdiction, fraud requires a monetary loss by the victim.
Generally, civil law is better suited for this sort of thing, no matter how good a pitchfork feels in your hand. As but one of the reasons, the required standard of proof is much lower.
Yeah, 18 USC 1030 (a)(2)(C) might be a better fit:
> Whoever ... intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains ... information from any protected computer ... shall be punished as provided in subsection (c) of this section.
(The definition of "protected computer" encompasses any computer that is "used in or affecting interstate or foreign commerce or communication".)
There’s got to be a monetary loss here. If there isn’t precedent for calculating that loss, such precedent should be established. Our email contacts are valuable, especially at 150m user scale. We could have all banded together and sold them, had Facebook not stolen them. These users should be compensated.
Of course. If the email contacts were of no value, Facebook wouldn't be taking them from accounts. People tend not to steal worthless assets. Unfortunately, monetary loss for the user may be tougher to prove than monetary gain for the thief.
> There’s got to be a monetary loss here. Our email contacts are valuable.
Why? Nobody lost their contacts, so what’s the $ amount it cost them? Facebook claims they’re deleting them. If that’s true, then Facebook isn’t gaining from the contacts. If users don’t lose anything and if Facebook doesn’t gain anything, what is the monetary loss?
> especially at 150m user scale
Where’s that number coming from? The article talks about 1.5 million users.
> We could have all banded together and sold them, had Facebook not stolen them.
So while it’s entirely true that contacts should never be copied without consent, and that’s exactly what happened, don’t forget that these users consciously gave Facebook their passwords. My email account password gives access to everything in my email account; no matter how much I trust what someone says they’ll do, I’ve always thought it was a terrible, terrible idea to hand it over when connecting services together, for this very reason. I’m saying it’s partly the users’ responsibility, and the outcome here is predictable, because it has been predicted before by many people.
BTW, nothing stopping you from banding together and selling email addresses now, if you think it’s a good idea... the blip with Facebook is not in any way preventing that from happening.
>don’t forget that these users consciously gave Facebook their passwords.
There is a lot of legal precedent about social engineering and how to prosecute it; this would completely fall under fraud. If I ask someone for their password to perform some service and then copy all of their data, that is a crime regardless of how stupid they are.
This really doesn't matter at all in a case of fraud if you gave the password willingly; it was given under false pretenses. If someone asks me to give them something so that they can provide a service, or takes those things as an investment, I willingly give them those things, yes, but we have a written, verbal, or implied contract that they will do and will not do certain things with that information. Failure to follow our agreement and instead robbing me is a crime.
Hey I’m 100% with you. I’m not defending Facebook, and it’s crazy they ask for passwords. But just because Facebook’s at fault doesn’t mean that it’s okay as a user to give out your password, nor does it mean that you lost any money when contacts are copied, right? The words “stealing” and “robbing” don’t really convey what happened here, even in the case Facebook isn’t telling the truth.
You are saying that the words 'rob' and 'steal' don't convey what has happened here, but this is only true in the colloquial sense. There is a good reason why many legal codes and laws start off with an exhaustively long list of definitions. Legal definitions often differ in very subtle ways that maybe aren't apparent at first glance.
If you don't think that this is the proper framing, maybe consider a different one. It is clear that there is definitely room to interpret this as a civil or criminal act regardless of how the parties craft their arguments. For example, imagine an employee who, on their last day of work, copies company data under their authorized username/password, even data with no actual value. This is often charged as a clear criminal offense. So to reiterate: an employee with authorization to access a dataset copies a large dataset with no obvious monetary value on their last day of work, one that they weren't given permission to copy. There have been cases that were literally this, and it is easy to see how this incident could line up with that legal approach.
I think you are fixating too much on a critique of the specific charge listed by the top of this thread. I was defending the idea that there would probably be a way to go about mounting a case in that way. You seem to think that this is the incorrect legal framing for this, which is totally fine. The legal process is more of a subjective art than a science.
What made you think I was fixating on anything? I just agreed with you that Facebook's action is at least negligent and could be criminal. I guess I'm fine with the word stealing in the sense of information theft. Still, Facebook claims it was an accident and that the data is being deleted. It might have been intentional, but I'd wait to call it intentional until proven, even though they've done it intentionally in other cases. :)
All I'm really saying is, no matter what, don't give out your password. And if you do, don't pretend to be shocked when something bad happens.
> Nobody lost their contacts, so what’s the $ amount it cost them?
Opportunity cost? If Facebook has these contacts now, then their third parties have them, so those contacts are no longer as valuable, if valuable at all.
> Where’s that number coming from? The article talks about 1.5 million users.
My bad, added two orders of magnitude by accident. I knew something was off there. Thanks for the correction.
> Opportunity cost? If Facebook has these contacts now, then their third parties have them, so those contacts are no longer as valuable, if valuable at all.
We don't know that's true, I would be cautious about making assumptions. But, even if we assume it is, opportunity cost isn't equivalent to financial loss, so we can't say people lost money they weren't already making.
Anyway, I don't think email lists being sold has prevented email addresses from appearing in other lists. It's clear to me that nobody is tracking the value of my email address because marketers keep buying it over and over.
That said, from my point of view, I don't like the idea of selling my own email address or trying to extract money from it. I don't want that, and I don't agree with the idea of selling my privacy in order to battle my concerns about Facebook taking and/or selling my privacy. The selling of my privacy is the very thing I don't want to have happen.
Privacy is not a monetary value for me, it's something I value having, not something I value selling. I don't want it to be subject to capitalist thinking and market analysis.
I think 'monetary loss' has a bit more of a meaning of actual money or assets lost, not the loss of a potential to earn money that you weren't really planning on realizing. Not saying I think it's not an issue! But I don't think the term 'monetary loss' is applicable.
Unfortunately I'm not a lawyer, so even my creative reinterpretation is moot, but I was thinking along the lines of a class action. Why can’t that group of people form a class? Is there really no damage here?
Of course we need actual fundamental privacy protection.
The statute says "anything of value." Here the thing of value would be a person's contact list. The attempt to gain this thing of value through deceit (telling the person you are trying to verify their account and using the access they give you to steal their contact list) would be the fraudulent act.
The fact that Facebook put a system in place to obtain these contact lists is evidence on its own of their value, but that value could also be quantified without much difficulty.
The only real question is: was dropping the consent form without removing the feature an honest mistake, or was it done because somebody decided it would result in a lower bounce rate and thus more money for Facebook?
If criminal law isn't capable of handling a hacker who hacked 1.5 million victims, criminal law is broken.
(If Facebook changed its name to Lulzsec2.0 of course the FBI would be very interested in the situation.)
And while the previous commenter quoted the part of the CFAA that mentions fraud, fraud isn't necessary to violate the CFAA. All you need to do is exceed authorized access to any internet-connected computer. Is there any doubt that Facebook has admitted to doing that?
It's not hacking. It's social engineering. It's no different than some smooth talking "Nigerian" getting your grandmother to cut a check. No systems were hacked here, no technical errors or design loopholes were exploited. People were persuaded into doing things that gave Facebook the access it needed to obtain the contact info.
There's no law that makes "hacking" a criminal offense. This particular case is just manipulation/social engineering so you probably shouldn't be calling it "hacking" on a message board that's mostly populated by software professionals to whom "hacking" has a meaning that does not include what is basically a con-man trick (though I see you have already edited the parent comment to reflect this).
We were literally just discussing the law that makes hacking a criminal offense. The Computer Fraud and Abuse Act makes it a federal offense; most if not all states also make it a state crime; most if not all other countries also make it a crime in their jurisdictions.
And yes, tricking someone into giving up their password is hacking (as any hacker will tell you), and it is a crime to use that password to swipe someone's contact database.
I'm not sure I can continue this thread with you because it seems you are very confused. I have also not edited any comments here.
Simply asking for email passwords indicates an intent to gain unauthorized access, and disguising the request as being part of a security-enhancing action eliminates all doubt.
Human cognition is riddled with exploitable defects. Biologically we are basically just highly pretentious and neurotic monkeys. All of human history is full of people looking for someone to blame for their condition (gods, devils, spirits, corporations, etc) but it never changes.
Keep in mind, from an evolutionary perspective we are exactly the same people who were burning witches at the stake and throwing people in lakes to determine their criminal culpability a few hundred years ago. We just have a different set of superstitions and delusions now.
I get it, but is that a thing you want to fix, or to prevent? Education, regulation, or something else, if we take as a given that the market has failed?
I wrote a comment twice (and deleted it as inflammatory; it’s a fact about reality that is largely counterproductive to debate) where I pointed out that Donald Trump is the president of the U.S., and that it’s wonderful or terrible depending on your point of view. I don’t want to derail all conversation into irrelevance, but what if the “wrong thing” is what people want? What do we even want, anyway? Is “exploitation of cognitive errors” just a thing we say when people decide they want things we think are dumb?
The true horror: what if this is the least worst thing people want?
How can you tell the difference between your own personal hangups and absolute moral truths? That's what we're really debating here. Good luck solving that one in an afternoon...
Option one, check with your community. That's the approach the pro-censorship side is taking: they only propose censoring content that every acceptable Silicon Valley individual thinks is objectively harmful. They also support drowning witches. Wait, no, wrong culture; they got it wrong, we got it right this time, we promise.
Option two, check with the conscience of the accused. Human beings regret most of their decisions, people are at their dumbest in the heat of the moment, and people tend to dig in when pushed from outside, so maybe the path to virtue can only be people improving themselves. However, if you're convinced that your enemies are all a bunch of complete evil psychopaths, then that won't work, because clearly psychopaths don't do that.
Option three, optimize for something completely unrelated, and pretend to be motivated by whatever suits your goals - if morals are "in," then pretend to be moral. That's probably what's going to happen if we can't pick between 1 and 2.
Option four, have an external standard that is neither based on the community nor the individual. But the problem is, that standard has to be right. If it's wrong, then it's going to be used to censor things that contradict it, and therefore enforce the wrongness. So you need a clear standard of what's right. Christianity once furnished such a standard for the West, but no longer. The closest we have now is the law, but that's halfway to option one.
> is that a thing you want to fix though, or to prevent?
Wanting to fix it is pretty obvious; being capable of fixing it is a different story. Biologically there is no fix (genetic engineering, maybe? but that is super sci-fi). So we can try to correct with technology or social conventions, but those fixes never change the underlying biological defects, so how effective can they really be?
Look at the social engineering aspects of organized Christianity. How well did those work? Look at the social engineering aspects of the U.S. experiment like universal education and literacy. How well did those work?
It is a little bleak but in reality universal literacy has been a complete failure (in the U.S. at least). Probably 80% of the population is at the level of what used to be called "knowing your letters" but they are functionally illiterate (they have never done any significant amount of reading in their life, and aren't really capable of it). That is not a popular opinion, at least not for public consumption, so we can't even begin to address the issue because we refuse to acknowledge that it exists.
If you include the notation used in math and science, then the actual level of literacy is at the same levels as you would have seen 500 years ago. It’s actually kind of similar; the liturgical class was literate enough to read and study the Bible, and the masses relied on them to parse the information. Today, it’s statistical models, but the concept is the same.
As a side note, what always amuses me when I see phrases like "what people want" is that it sounds as if there are all these people out there, and the speaker is not one of them.
I suppose the situation is usually more complex; in the simplest case it's just "us vs them", but likely it's a more detailed separation of groups, based on culture, values, etc. Thinking a bit more explicitly about that, and especially about who are "we", the ingroup, could be beneficial.
What is it that you refer to, exactly? The deep non-uniformity of our current society and politics, globally, indicates that there is, at the very least, a lot of room to maneuver and improve. Liberalism, for example, likes to push the narrative of the least bad option, but it often does so to reject ideas with a clear precedent of working better than what we have now: the ongoing decline of equality and the flight of alienated workers towards reactionary politics.
In the U.S. a large segment of the population (100M+) is told over-and-over again that they should rely on the expertise and charity of the elite in business and politics to take care of them.
Examples like this just show how naive and detached from reality that self-appointed elite is, and how little value their "charity" provides.
"Self-appointed elite" is a weird bag of people, one that would likely contain Bill Gates, Larry Ellison, and many random CEOs in between. Their approaches and value are radically different, so is generalising here useful?
The problem with "objective" journalism is that truth and falsehood are not as important as what the objective of the story is. If a propagandist can use the truth to achieve their objectives, that is better than lies, because people can detect lies more easily. Objective journalism is based on the false premise that reporting facts differentiates you from agenda-driven propaganda, when it is really just the most effective form of agenda-driven propaganda.
> Will also never leave a good review when I'm satisfied ... tend to rather just leave bad reviews when I'm not satisfied
This is basic consumer behavior. Receiving a good experience (either service or product) is not notable because you paid for it and expect it, whereas a bad experience is offensive and makes you feel cheated so you retaliate by taking your time to leave a bad review.
What this means at scale is that most positive reviews are fake except for the truly extraordinary products/services that are far above all their peers in terms of quality or novelty.
In order to get decent reviews you have to be able to verify that the consumer actually paid for the product/experience and then you have to apply some sort of sampling methodology and statistical analysis to arrive at a meaningful relative score to other products/services in the same industry.
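One standard piece of that statistical analysis is ranking by a confidence bound rather than the raw average, so that a product with a handful of reviews can't outrank one with hundreds. A sketch using the lower bound of the Wilson score interval (a well-known technique; the function name and sample counts here are illustrative, not from the comment):

```python
import math

def wilson_lower_bound(positive, total, z=1.96):
    """Lower bound of the Wilson score interval for the
    proportion of positive reviews, at ~95% confidence
    (z = 1.96). Fewer samples widen the interval and pull
    the bound down, penalizing thinly-reviewed items."""
    if total == 0:
        return 0.0
    phat = positive / total
    denom = 1 + z * z / total
    centre = phat + z * z / (2 * total)
    margin = z * math.sqrt(
        (phat * (1 - phat) + z * z / (4 * total)) / total)
    return (centre - margin) / denom

# Same 90% positive rate, but the larger sample ranks higher.
print(wilson_lower_bound(9, 10) < wilson_lower_bound(90, 100))  # True
```

This only addresses the scoring half; verifying that reviewers actually paid for the product is a separate (and harder) data-collection problem.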
No review site has any interest in doing this because they are just using reviews to generate free content for SEO, to put ads on, and to extort businesses into paying them to "manage" negative reviews in various ways.