It's Time to Stop Taking Sam Altman at His Word (theatlantic.com)

586 points by redwoolf 10 hours ago

478 comments:

by lolinder 8 hours ago

To recap OpenAI's decisions over the past year:

* They burned up the hype for GPT-5 on 4o and o1, which are great step changes but nothing the competition can't quickly replicate.

* They dissolved the safety team.

* They switched to for profit and are poised to give Altman equity.

* All while hyping AGI more than ever.

All of this suggests to me that Altman is in short-term exit preparation mode, not planning for AGI or even GPT-5. If he had another next generation model on the way he wouldn't have let the media call his "discount GPT-4" and "tree of thought" models GPT-5. If he sincerely thought AGI was on the horizon he wouldn't be eyeing the exit, and he likely wouldn't have gotten rid of the superalignment team. His actions are best explained as those of a startup CEO who sees the hype cycle he's been riding coming to an end and is looking to exit before we hit the trough of disillusionment.

None of this is to say that AI hasn't already changed a lot about the world we live in and won't continue to change things more. We will eventually hit the slope of enlightenment, but my bet is that Altman will have exited by then.

by pj_mukh 3 hours ago

Except the article makes none of these points. The article is saying:

a) "the technology is overhyped", based on some meaningless subjective criteria, if you think a technology is overhyped, don't invest your money or time in it. No one's forcing you.

b) "child abuse problems are more important", with a link to an article that clearly specifies that the child abuse problems have nothing to do with OpenAI.

c) "it uses too much energy and water". OpenAI is paying fair market price for that energy and what's more the infrastructure companies are using those profits to start making massive investments in alternative energy [1]. So if everything about this AI boom fails what we'll be left with is a massive amount of abundant renewable energy (the horror!)

Probably the laziest conjecture I have endured from The Atlantic.

[1]: https://www.cbc.ca/news/canada/calgary/artificial-intelligen...

by baking 2 hours ago

>So if everything about this AI boom fails what we'll be left with is a massive amount of abundant renewable energy

Except that someone has to pay for it. AI companies are only willing to pay for power purchase agreements, not capital expenses. Same with the $7T of chip fab. Invest your money in huge capital expenditures and our investors will pay you for it on an annual basis until they get tired of losing money.

by lrg10aa 8 hours ago

It does look like an exit. Employees were given the chance to cash in some of their shares at an $86 billion valuation. Altman is getting shares.

New "investors" are Microsoft and Nvidia. Nvidia will get the money back as revenue and fuel the hype for other customers. Microsoft will probably pay in Azure credits.

If OpenAI does not make profit within two years, the "investment" will turn into a loan, which probably means bankruptcy. But at that stage all parties have already got what they wanted.

by jameslk 6 hours ago

> If OpenAI does not make profit within two years, the "investment" will turn into a loan, which probably means bankruptcy.

I don’t believe this is accurate. I think this is what you’re referring to?:

Under the terms of the new investment round, OpenAI has two years to transform into a for-profit business or its funding will convert into debt, according to documents reviewed by The Times.

That just means investors want the business to be converted from a nonprofit entity into a regular for-profit entity. Not that they need to make a profit in 2 years, which is not typically an expectation for a company still trying to grow and capture market share.

Source: https://www.nytimes.com/2024/10/02/technology/openai-valuati...

by agentcoops 6 hours ago

I think for anyone who has been intimately involved with this last generation of "unicorns", it really does not look like an "exit" if that means bankruptcy, end of the company, betrayal of goals, or what have you: for a high-growth startup of approximately ten years, finding liquidity for employees is perhaps counter-intuitively a sign of deep maturity and that the company is concerned for the long haul (and that investors/the market believe this is likely).

At about the ten year mark, there has to be a changing of the guard from the foot soldiers who give their all so that an unlikely institution could come to exist in the world at scale to people concerned more with stabilizing that institution and ensuring its continuity. In almost every company that has reached such scale in the last decade, this has often meant a transition from an executive team formed of early employees to a more senior C-team from elsewhere with a different skillset. In a world context where the largest companies are more likely to stay private than IPO, it's a profoundly important move to allow some liquidity for long-term employees, who otherwise might be forced to stay working at the company long past physical burnout.

by throwup238 3 hours ago

Agreed. These liquidity events are a way to retain and reward early employees while allowing the founders to recover their personal finances and buckle down for the growth phase.

by lotsofpulp 5 hours ago

> In a world context where the largest companies are more likely to stay private than IPO

Which world is this?

by dash2 4 hours ago

From a Scottish Mortgage report: "companies are staying private for longer and until higher valuations".

    Amazon founded in 1994
    1997 listed at $400m
    Google founded in 1998 
    2004 listed at $23bn
    Spotify founded in 2006
    2018 listed at $27bn
    Airbnb founded in 2008
    2020 listed at $47bn
    Epic games founded in 1991
    2022 unlisted value $32bn
    Space Exploration founded in 2002
    2022 unlisted value $125bn
    ByteDance founded in 2012
    2022 unlisted value $360bn
by lotsofpulp 4 hours ago

The only unlisted business there that could be considered among “the largest” is ByteDance. But they have a whole China thing going on, so not very representative of US/European markets.

I doubt any competitor to the largest businesses in the US/Europe that is actually putting up good audited numbers is staying private. Even Stripe has been trying to go public, but it doesn’t have the numbers for the owners to want to yet.

by manquer 3 hours ago

SpaceX hasn’t tried to go public; arguably being private has helped them, since the ability to raise a lot of money from investors is key to their successes.

Their kind of product development [1] needs long-term thinking, which public markets will not support well

[1] ignore all the mars noise, just consider reusable i.e. cheap rockets and starlink

by financetechbro 4 hours ago

Agreed. OP is crossing some wires. More companies are staying private right now (not by choice), not necessarily or only the “largest companies”.

by getpost 6 hours ago

I don't know anything, but isn't Sam already rich? And, if OpenAI is in a position to capture profits in an expanding AI industry, isn't this the best possible time to lock in equity for long term gains and trillionaireship?

For many people, sadly, one can never be rich enough. My point is, planning for a short-term exit and planning for long-term gains are essentially the same in this particular situation. What a boon! Nice problem to have!

by mirekrusin 5 hours ago

For some people the question "how much?" has only one answer – "more."

by baking 2 hours ago

He is tied for last place on the most recent Forbes Billionaire list.

https://www.forbes.com/profile/sam-altman/

by gizmo 5 hours ago

Sam was already a billionaire years ago. He is one of SV's most prolific investors, with equity in hundreds of startups. He writes big checks as well from time to time; he gave Helion $375 million, for instance.

by GolfPopper 4 hours ago

To be clear, Altman invested $375 million in Helion Energy's 2021 funding round. Helion Energy is a for-profit fusion energy company of which he is the chairman.

by baking 2 hours ago

He is the Executive Chairman at Helion, which means he keeps a tight grip on that particular purse string.

by andrewinardeer 3 hours ago

My understanding is that Altman joined the Three Comma Club only this year, not years ago.

Yes, he has made lots of investments over the years as head of YC but not every investment was successful. This was discussed on BBC's podcast 'Good Billionaire Bad Billionaire' recently.

by VirusNewbie 4 hours ago

How? He did not have a large exit to give him a bankroll to invest. Unless you mean from whatever YC shares he received as President, which seems unlikely. Even if he was paid ~25 million in equity per year, did YC shares increase 10x in that time?

by disiplus 3 hours ago

Didn't he own like 10% of Reddit? Here's what pg had to say about the guy:

"Sam Altman has it. You could parachute him into an island full of cannibals and come back in 5 years and he'd be the king. If you're Sam Altman, you don't have to be profitable to convey to investors that you'll succeed with or without them. (He wasn't, and he did.) Not everyone has Sam's deal-making ability. I myself don't. But if you don't, you can let the numbers speak for you."

https://paulgraham.com/fundraising.html

by danga 4 hours ago

He also got weird facial plastic surgery and looks like a wanted mobster now.

by QuantumGood 4 hours ago

His recent statement that AGI may "only be a few thousand days away" is clearly an attempt to find a less quotable way of extending estimates while barely reducing hype. History generally shows that estimates of "more than five years away" are meaningless.

by CurrentB 3 hours ago

"a few thousand days away" feels like such a duplicitous way to phrase something when "a few years" would be more natural to everyone on the planet. It just seems intentionally manipulative and not even trying to hide it. I've never been an anti-ceo type person but something about Altman sketches me out

by renegade-otter 3 hours ago

4000 days is just a few thousand days, but it's [checks notes] over 10 years away.

by QuantumGood an hour ago

He knows as well as anyone about the 5 year heuristic (a length too long to be meaningful), so he's trying to say "over 5 years" while still holding onto the sense of a "meaningful estimate". My sense about Sam is that he trusts his PR instinct to be usefully manipulative enough that he doesn't think he has to plan carefully in order for his statements to be maximally, opaquely manipulative. (He speaks cleverly and manipulatively without a lot of effort.)

by __MatrixMan__ 8 hours ago

I don't really follow Altman's behavior much, but just in general:

> If he sincerely thought AGI was on the horizon he wouldn't be eyeing the exit

If such a thing could exist and was right around the corner, why would you need a company for it? Couldn't the AGI manage itself better than you could? Job's done, time to get a different hobby.

by azinman2 7 hours ago

Let’s say you got AGI, and it approximated a not so bright impulsive 12 year old boy. That would be an insane technological leap, but hardly one you’d want running the show.

AGI doesn’t mean smarter than the best humans.

by richardw 6 hours ago

A 12 year old that doesn’t sleep, eat, or forget, learns anything within seconds, can replicate and work together with fast-as-light communication and no data loss when doing so. Arrives out of the womb knowing everything the internet knows. Can work on one problem until it gets it right. Doesn’t get stale or old. Only has emotion if it’s useful. Can do math and simulate. Doesn’t get bored. Can theoretically improve its own design and run experiments with aforementioned focus, simulation ability, etc.

What could you do at 12, with half of these advantages? Choose any of them, then give yourself infinite time to use them.

by SoftTalker 5 hours ago

Why do we assume an AGI would not be forgetful or would never get bored?

by richardw 3 hours ago

Because we can change the hardware or software to be what we want. It can write to storage and read it, like we can but faster.

by bad_user 4 hours ago

A 12 year old that doesn't interact with the world.

Many believe that AGI will happen in robots, and not in online services, simply because interacting with the environment might be a prerequisite for developing consciousness.

You mentioned boredom, which is interesting, as boredom may also be a trait of intelligence. An interesting question is if it will want to live at all. Humans have all these pleasure sensors and programming for staying alive and reproducing. The unburdened AGI in your description might not have good reasons to live. Marvin, the depressed robot, might become real.

by cdchn 3 hours ago

>simply because interacting with the environment might be a prerequisite for developing consciousness.

We can't even define what consciousness is yet, let alone what's required to develop it.

by richardw 3 hours ago

I’m not sure, but it’s possible Stephen Hawking would have been fine with becoming digital, assuming he could keep all the traits he valued. He had a pretty low data communication rate, interacted digitally, and did more than most humans can. Give him access to anything digital at high speed and he’d have had a field day. If he could stay off Twitter.

by pstuart 5 hours ago

And then with advancement, reaches teenage maturity and tells everyone to fuck off.

by thordenmark 4 hours ago

As someone who has teenagers right now, I can confirm that this is accurate.

by rktan 4 hours ago

A 12 year old could determine that "AI" is boring and counterproductive for humanity and switch off a computer or data center. Greta Thunberg did similar for the climate, perhaps we need a new child saint who fights "AI".

by pounderstanding 5 hours ago

> AGI doesn’t mean smarter than the best humans.

Technically no, but practically...

A 12 year old's limitations are: A. gets tired, needs sleep; B. I/O limited by muscles.

Probably there are more, but if a 12 year old could talk directly to electric circuits and would not need sleep or even a break, then that 12 year old would be leaps and bounds above the best human in his field of interest.

(Well motivation to finish the task is needed though)

by prng2021 6 hours ago

If it's stuck at 12 year old level intelligence then it's not generally intelligent. 12 year olds can learn to think like 13 year olds, and so on.

by azinman2 5 hours ago

A set of weights sitting on your hard disk doesn’t evolve to get smarter on its own. AGI does, however, require the ability to learn new skills in new domains; if you kept exposing it to new ones, I don’t see why it wouldn’t be similar. But its fundamental capacities in how good it is at reasoning/planning don’t have to exceed any given human's for it to be considered AGI.

by Triphibian 7 hours ago

All it would want to do is talk about Minecraft and the funny memes they saw.

by __MatrixMan__ 7 hours ago

For an intelligence to be "General" there would have to be no type of intelligence that it did not have access to (even if its capabilities in that domain were limited). The idea that that's what humans have strikes me as the same kind of thinking that led us to believe that earth was in the center of the universe. Surely there are ways of thinking that we have no concept of.

General intelligence would be like an impulsive 12 year old boy who could see 6 spatial dimensions and regarded us as cartoons for only sticking to 3.

by throwaway314155 7 hours ago

Humans can survive in space and on the moon because our intelligence is robust to environments we never encountered (or evolved to encounter). That's "all" general intelligence is meant to refer to. General just means robust to the unknown.

I've seen some use "super" (as in superhuman) intelligence lately to describe what you're getting at.

by __MatrixMan__ 5 hours ago

Super feels to me like it's a difference in degree rather than in kind. Something with intelligences {A, b, c} might be super intelligent compared with {a, b, c}. i.e. more intelligent in domain A.

But if one has {a, b, c} and the other has {b, c, d} neither is more or less intelligent than the other, they just have different capabilities. "Super" is a bit too one-dimensional for the job.

by throwaway314155 3 hours ago

I don't think using set notation is really helping your case here but (I think?) I agree.

by esskay 5 hours ago

What stops it from getting more intelligent? That's literally its primary aim. Its only limit on how quickly it does that is hardware capacity, and I'd be shocked if it didn't somehow theorise to itself that it can and should expand its knowledge in any way possible.

by echoangle 5 hours ago

Why would the only limit be hardware capacity? Can’t there be an innate limit due to the model size/architecture? Maybe LLMs can’t be superintelligent because they are missing a critical ability which can’t be overcome with any amount of training? It’s not obvious to me that it’s going to get smarter infinitely.

by nobodyandproud 7 hours ago

This is great insight. Science fiction does a great job inspiring technology research, but we forget that real-world accomplishments can be much humbler and still astonishing all the same.

I'm a bit tired of the hype surrounding LLMs, but all the same, for very mundane and humbler tasks that require some intelligence, modern LLMs manage to surprise me on a daily basis.

But they rarely accomplish more than what a small collection of humans with some level of expertise can achieve, when asked.

by Nevermark 5 hours ago

Or accomplishments can genuinely be astounding.

Surely, the LLM models we have today are astounding by any measure, relative to just a few years ago.

But pronouncements of how this will lead to utopia, without introducing a major revision of economic arrangements, are completely, and surely intentionally/conveniently (Sam isn't an idiot) misleading.

Is OpenAI creating a class of stock so everyone can share in their gains? If not, then AGI owned by OpenAI will make OpenAI shareholders rich, very much to the degree its AGI eliminates human jobs for itself and other corporations.

How does that, as an economic situation, result in the general population being able to do anything beyond be a customer, assuming they can still make money in some way not taken over by AGI?

Utopia needs an actual plan. Not a concept of a plan.

The latter just keeps people snowed and calm before a historic-level rug pull.

by SoftTalker 5 hours ago

Altman's AGI is Musk's full self-driving car.

by lolinder 8 hours ago

If such a thing was right around the corner, the person who controlled it would be the only person left who had any kind of control over their own future.

Why would you sell that?

by __MatrixMan__ 7 hours ago

I'm not a believer in general intelligence myself, all we have are a small pile of specific intelligences. But if it does exists then it would be godlike to us. I can't guess at the motivations of somebody who would want to bootstrap a god, but I doubt that Altman is so strapped for cash that his primary motivator is coming up with something to sell.

by nobodyandproud 7 hours ago

As other comments have mentioned, AGI doesn't mean god-like or super-human intelligence.

For these models today, if we measure the amount of energy expended for training and inference how do humans compare?

by jdiez17 5 hours ago

I did a similar calculation a few weeks ago:

Humans consume about 100W average power (2000 kcal/day converted to watt-hours, divided by 24 hours). So 8 billion people consume ~800 GW. Call it 1 TW. Average world electric power generation is 28,000 TWh / (24*365 hours), so ~3 TW.
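
The same arithmetic as a quick script (the 2000 kcal/day, 8 billion people, and 28,000 TWh/year figures are the rough inputs above, not precise data):

    # Rough sanity check of the figures above (1 kcal ~= 1.163 Wh).
    person_w = 2000 * 1.163 / 24                # ~97 W per person
    humanity_tw = 8e9 * person_w / 1e12         # ~0.8 TW for 8 billion people
    grid_tw = 28_000e12 / (24 * 365) / 1e12     # 28,000 TWh/yr ~= 3.2 TW average
    print(round(person_w), round(humanity_tw, 1), round(grid_tw, 1))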

by ben_w 6 hours ago

> For these models today, if we measure the amount of energy expended for training and inference how do humans compare?

My best guess is 120,000 times more for training GPT-4 (based on the claim that it cost $63 million, assuming that was all electricity at $0.15/kWh, and looking only at the human brain and not the whole body).

But also, 4o mini would then be a kilowatt hour for a million tokens at inference time; by the same assumptions that's 50 hours, or just over one working week, of brain energy consumption. A million tokens over 50 hours is 5.5 tokens per second, which sounds about like what I expect a human brain to do, but caveat that with me not being a cognitive scientist, and what we think we're thinking isn't necessarily what we're actually thinking.
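
Spelling out that back-of-envelope (the 20 W brain, $0.15/kWh, and $63M-all-electricity figures are the assumptions above; roughly 20 years of brain time for the training comparison and roughly $0.15 per million 4o mini tokens are my guesses at the unstated inputs):

    BRAIN_KW = 0.020                       # assumed human brain draw, 20 W
    PRICE_KWH = 0.15                       # assumed electricity price, $/kWh

    train_kwh = 63e6 / PRICE_KWH           # $63M spent entirely on electricity
    brain_kwh_20yr = BRAIN_KW * 24 * 365 * 20
    print(train_kwh / brain_kwh_20yr)      # ~120,000x ~20 years of brain energy

    kwh_per_mtok = 0.15 / PRICE_KWH        # ~$0.15 per 1M tokens -> ~1 kWh
    brain_hours = kwh_per_mtok / BRAIN_KW  # ~50 brain-hours per 1M tokens
    print(1e6 / (brain_hours * 3600))      # ~5.5 tokens per second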

by Zondartul 3 hours ago

If we figure out AGI, that still doesn't mean a singularity. I'm going to speak as though we're on the brink of AI outthinking every human on earth (we are not) but bear with me, I want to make it clear we're not going jobless any time soon.

For starters, we still need the AI (LLMs for now) to be more efficient, i.e. not require a datacenter to train and deploy. Yes, I know there are tiny models you can run on your home PC, but that's comparing a bicycle to a jet.

Second, for an AGI to meaningfully improve itself, it has to be smarter than not just any one person, but the sum total of all the people it took to invent it. Until then no single AI can replace our human tech sphere of activity.

As long as there are limits to how smart an AI can get, there are places where humans can contribute economically. If there is ever to be a singularity, it's going to be a slow one, and large human AI companies will be part of the process for many decades still.

by FrustratedMonky 7 hours ago

"why would you need a company for it? Couldn't the AGI manage itself better than you could?"

Well, you still have to have the baby, and raise it a little. And wouldn't you still want to be known as the parent of such a bright kid as AGI? Leaving early seems to be cutting down on his legacy, if a legacy was coming.

by karmakaze 6 hours ago

Let's go with the no next gen models, then we'd be looking at an operationalized service. I might be up for paying $20/month for one. Certainly more value than I get from Netflix. All I would want from the service is a simple low-friction UI and continued improvements to keep up with or surpass competitors. They could manage that.

The long-term problem may be access to quality/human-created training data. Especially if the ones that control that data have AI plans of their own. Even then I could see OpenAI providing service to many of them rather than each of them creating their own models.

by from-nibly 6 hours ago

I doubt $20 a month is going to cover it. How much would you pay before you weren't getting a good deal?

by wkat4242 6 hours ago

The thing is, you can be economical. You don't need a GPT-4 quality model for everything. Some things are just low value where a 3.5 model would do just fine.

I never use the $20 plan; I access everything via API and spend a couple of dollars per month.

Although lately I have a home server that can do llama 3.1 8b uncensored and that actually works amazingly well.

by changing1999 an hour ago

I use Llama 3.1 8B Instruct 128k at home and that pretty much covers all my LLM needs. Don't see a reason to pay for GPT-4.

by asyx 3 hours ago

What’s your home server specs? I might want to host something like that too but I think I’d need to upgrade.

by wkat4242 44 minutes ago

An old Ryzen CPU (2600 IIRC) and Radeon Pro VII 16GB. Got it new at a really good price.

It works OK, but with a large context it can still run out of memory and also gets a lot slower. With a small context it's super snappy and surprisingly good. What it is bad at is facts/knowledge, but that is not something an LLM is meant to do anyway. OpenWebUI has really good search engine integration, which makes it work like Perplexity does. That's a better option for knowledge use cases.

by titanomachy 6 hours ago

If we’re talking about inference costs alone (minimal new training) I think it would cover it. I started doing usage-based pricing through openrouter instead of paying monthly subscriptions, and my modest LLM use costs me about $3/month. Unless you think that the API rates are also priced below cost.

by mikeocool 4 hours ago

According to the New York Times [1], OpenAI told investors that their costs are increasing as more people use their product, and they are planning to increase the cost of ChatGPT from $20 to $44 over the next five years. That certainly suggests that they're selling inference at a loss right now.

[1] https://www.nytimes.com/2024/09/27/technology/openai-chatgpt...

by cyanydeez 6 hours ago

There's never going to be quality data at the scale they need it.

At best, there's a slow march of incremental improvements that looks exactly like how human culture developed knowledge.

And all the downsides will remain, the same way people, despite hundreds of good sources of info, still prefer garbage.

by jerpint 5 hours ago

Video data is still very much untapped and likely to unlock a step function worth of data. Current image-language models are trained mostly on {image, caption} pairs with a bit of extra fine tuning

by KPGv2 2 hours ago

Do you think it matters that there's orders of magnitude less video (and audio) data than text data?

by luma 4 hours ago

Nobody close to the edge of this tech seems to believe that this is true, and the 4o release suggests synthetic approaches work well.

by wildermuthn 5 hours ago

The only reason Sam would leave OpenAI is if he thought AGI could only be achieved elsewhere, or that AGI was impossible without some other breakthrough in another industry (energy, hardware, etc).

High-intelligence AGI is the last human invention — the holy grail of technology. Nothing could be more ambitious, and if we know anything about Altman, it is that his ambition has no ceiling.

Having said all of that, OpenAI appears to be all in on brute-force AGI and swallowing the bitter lesson that vast and efficient compute is all you need. But they're overlooking a massive dataset that all known biological intelligences rely upon: qualia. By definition, qualia exist only within conscious minds. Until we train models on qualia, we’ll be stuck with LLMs that are philosophical zombies — incapable of understanding our world — a world that consists only of qualia.

Building software capable of utilizing qualia requires us to put aside the hard problem of consciousness in favor of mechanical/deterministic theories of consciousness like Attention-Schema Theory (AST). Sure, we don’t understand qualia. We might never understand. But that doesn’t mean we can’t replicate.

by Zondartul 3 hours ago

Future Neuralink collab? Grab the experience of qualia right from the brains of those who do the experiencing.

by woodruffw 5 hours ago

> Sure, we don’t understand qualia. We might never understand. But that doesn’t mean we can’t replicate.

I’m pretty sure it means exactly that. Without actually understanding subjective experience, there’s a fundamental doubt akin to the Chinese room. Sweeping that under the carpet and declaring victory doesn’t in fact victory make.

by wildermuthn 4 hours ago

If the universe is material, then we already know with 10-billion percent certainty that some arrangement of matter causes qualia. All we have to do is figure out what arrangements do that.

Ironically, we understand consciousness perfectly. It is literally the only thing we know — conscious experience. We just don’t know, yet, how to replicate it outside of biological reproduction.

by talldayo 5 hours ago

> High-intelligence AGI is the last human invention

Citation?

...or are you just assuming that AGI will be able to solve all of our problems, apropos of nothing but Sam Altman's word? I haven't seen a single credible study suggest that AGI is anything more than a marketing term for vaporware.

by mrmetanoia 4 hours ago

Their marketing hyperbole has cheapened much of the language around AI, so naturally it excites someone who writes like a disciple of the techno-prophets.

"High-intelligence AGI is the last human invention." What? I could certainly see all kinds of entertaining arguments for this, but to write it so matter-of-factly was cringe inducing.

by wildermuthn 4 hours ago

It’s true by definition. If we invent a better-than-all-humans inventor, then human invention will give way. It’s a fairly simple idea, and not one I made up.

by jcgrillo 3 hours ago

> then human invention will give way

What? Would you mind explaining this?

by bitshiftfaced 5 hours ago

> All while hyping AGI more than ever.

Maybe not, since Altman pretty much said they no longer want to think in terms of "how close to AGI?". IIRC, he said they're moving away from that and instead want to describe the process as hitting new specific capabilities incrementally.

by falcor84 4 hours ago

> ... 4o and o1, which are great step changes but nothing the competition can't quickly replicate.

Where did that assertion come from? Has anyone come close to replicating either of these yet (other than possibly Google, who hasn't fully released their thing yet either), let alone "quickly"? I wouldn't be surprised if these "sideways" architectural changes actually give OpenAI a deeper moat than just working on larger models.

by dcre 3 hours ago

Claude 3.5 Sonnet is at least as good as 4o.

by falcor84 3 hours ago

Oh, sorry I wasn't clear, I was referring to the advanced (speech-to-speech) voice mode, which for me is the highlight of this "omni" thing, rather than to the "strength" of the model.

by dcre 2 hours ago

I thought that might be the case. On that: even though other big players haven't copied that particular feature, I doubt (though I'm not by any means an expert) that it's anywhere near as hard to replicate as the fundamental model. I do agree that at least product-wise, they're trying to differentiate through these sort of model-adjacent features, but I think these are classic product problems requiring product solutions rather than deep tech innovation. Claude Artifacts, GPT Canvas, speech, editor integration, GitHub integration, live video integration, that kind of stuff.

by stavros 3 hours ago

Much better than 4o, in my experience. I've stopped using 4o almost completely, except for cases where I need good tooling more than I need good performance (PoCs and the like).

by disiplus 3 hours ago

It's better at coding for me, and it looks like that's because it was trained on more data. I have this problem with a library that I use: the models were trained on the Python 2 version, but the Python 3 version is completely different. The Python 3 version is relatively new, but has been out since at least 2020, and you would mostly find examples with the Python 2 version if you googled.

They both produce garbage code in this situation. Claude's version is just 20% less garbage, but still useless. The code mixes those two, even if I specify I want the Python 3 version or directly specify a version.

by hintymad 4 hours ago

> They dissolved the safety team.

I still don't get the safety team. Yes, I understand the need for a business to moderate the content they provide, and rightly so. But elevating safety to the level of the survival of humanity over a generative model, I'm not so sure. And even for so-called prevention of harmful content, how can an LLM be more dangerous than access to books like The Anarchist Cookbook, pamphlets on how to conduct guerrilla warfare, training materials on how to commit terrorism, etc.? They are easily accessible on the internet, no?

by lolinder 4 hours ago

Having a dedicated safety team (or as they called it "superalignment") makes sense if and only if you believe that your company is pursuing an actual sci-fi style artificial superintelligence that could put humanity itself at risk [0]:

> Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.

> How do we ensure AI systems much smarter than humans follow human intent?

This is a question that naturally arises if you are pursuing something that's superhuman, and a question that's pointless if you believe you're likely to get a really nice algorithm for solving certain kinds of problems that were hard to solve before.

Getting rid of the superalignment team showed which version Altman believes is likely.

[0] https://openai.com/index/introducing-superalignment/

by Vespasian 4 hours ago

And importantly it's also a question that would need an answer if the alignment should be with "individual groups" rather than "humanity".

It won't do Sam Altman and friends any good if they are the richest corpses after an unaligned AI goes rogue.

So it would be in their own egotistical self-interest to make sure it doesn't.

by Mistletoe 8 hours ago

For people that don't get the references, this graph is so helpful for understanding the world.

https://en.wikipedia.org/wiki/Gartner_hype_cycle

It just keeps happening over and over. I'd say we are at "Negative press begins".

by stainablesteel 4 hours ago

I disagree that it looks like an exit; I think Altman is here for the long haul. He's got a great platform, and a lot of AI competitors are also preparing for the long term. No one cares about who has the best model for the next year or two; they care who has it for the next 20.

by ForOldHack 6 hours ago

Every single word he says in public is his product.

by BryanLegend 6 hours ago

And he doesn't even use the shift key!

by slashdave 5 hours ago

Actually, his blog is correctly capitalized. Although, you would think he could hire a proper editor, since the em-dashes are not formatted correctly. Baby steps.

by gojomo 4 hours ago

I think you're projecting a short-term cash-out mentality here.

Altman's actions are even more consistent with total confidence & dedication to a vision where OpenAI is the 1st and faraway leader in the production of the most valuable 'technology' ever. Plus, a desire to retain more personal control over that outcome – & not just conventional wealth! – than was typical with prior breakthroughs.

by lolinder 4 hours ago

> you're projecting a short-term cash-out mentality

I'm a software engineer comfortably drawing a decent-but-not-FAANG paycheck at an established company with every intention of taking the slow-and-steady path to retirement. I'm not projecting, I promise.

> to a vision where OpenAI is the 1st and faraway leader in the production of the most valuable 'technology' ever

Except that OpenAI isn't a faraway leader. Their big news this week was finally making an effort to catch up to Anthropic's Artifacts. Their best models do only marginally better in the LLM Arena [0] than Claude, Gemini, and even the freely-released Llama 3.1 405B!

Part of why I believe Altman is looking to cash out is that I think he's smart enough to recognize that he has no moat and a very small lead. His efforts to get governments to pull up the ladder have largely failed, so the next logical choice is to exit at peak valuation rather than waiting for investors to recognize that OpenAI is increasingly just one in a crowd.

[0] https://lmarena.ai/

by wslh 5 hours ago

I see it differently. ChatGPT holds a unique position because it has the most important asset in tech: user attention. Building a brand and capturing a significant share of eyeballs is one of the hardest challenges for any company, startup or not. From a business standpoint, the discussion around GPT-5 or AGI seems secondary. What truly matters is the growing number of users paying for ChatGPT, with potential for monetization through ads or other services in the future.

by deepfriedchokes 3 hours ago

The internet has destroyed the value of user attention and brands; everyone has the attention span of a gnat and the loyalty of an addict. They will very quickly move on to the next shiny thing.

by wslh 3 hours ago

My comment doesn't negate your point. I'm emphasizing the business side beyond the moral considerations. If anything, we've seen that brands which contributed to unhealthy habits, like the rise of fast food and sugary drinks, still hold immense value in the market. User loyalty may be fleeting, but businesses can still thrive by leveraging brand strength and market presence, regardless of the moral implications.

by semi-extrinsic 4 hours ago

> What truly matters is the growing number of users paying for ChatGPT, with potential for monetization through ads or other services in the future.

If the end goal is monetization of ChatGPT with ads, it will be enshittified to the same degree as Google searches. If you get to that, what is the benefit of using ChatGPT if it just gives you the same ads and bullshit as Google?

by wslh 4 hours ago

I mentioned ads as just one potential avenue for monetization. My main point is that OpenAI's current market position and strong brand awareness are the real differentiators. They're in a unique spot where they have the user's attention, which offers a range of monetization options beyond just ads. The challenge for them will be to balance monetization with maintaining the user experience that made them successful in the first place.

Also, don't forget the recent Apple partnership [1], a very strong signal of their strategic positioning. Aligning with Apple reinforces their credibility and opens up even more opportunities for innovation and expansion, beyond just monetizing through ads. I just searched through this thread, and it seems the Apple partnership isn't being recognized as a significant achievement under Sam Altman's tenure as CEO, which is surprising given its importance.

[1] https://news.ycombinator.com/item?id=40328927

by bitcharmer 5 hours ago

It's funny and somehow disappointing how a year ago HN jerked off to Altman and down-voted any critical opinions of him. Just shows no social platform is free of strong group-think and mass hysteria.

by Analemma_ 8 hours ago

Yeah, everything from OpenAI in the last year suggests they have nothing left up their sleeve, they know the competition is going to catch up very soon, and they're trying to cash out as fast as possible before the market notices.

(In the Gell-Mann amnesia sense, make sure you take careful note of who was going "OAI has AGI internally!!!" and other such nonsense so you can not pay them any mind in the future)

by m3kw9 7 hours ago

"Dissolve the safety team." You just made everyone stop reading the rest of your post by falsely claiming that.

by whamlastxmas 7 hours ago

I don’t even know how this fake news started

by lolinder 6 hours ago

Maybe the fact that they actually did dissolve the safety team formerly led by Ilya Sutskever in the aftermath of Altman's coup [0]? I'm genuinely unsure what part of this you're questioning.

[0] https://www.bloomberg.com/news/articles/2024-05-17/openai-di...

by whamlastxmas 5 hours ago

Like, literally the third sentence makes it clear it was restructuring team dynamics, not eliminating safety goals or even losing the employees that are safety focused. There are many safety-focused people at OpenAI, and they never fired or laid off people on a safety team, which is what people infer from “dissolve the safety team” when it's used as a bullet point of criticism.

by lolinder 5 hours ago

It's pretty obvious to most of us that "integrating the group more deeply across its research efforts to help the company achieve its safety goals" is the corporate spin for what was actually "disperse the group that thought its job was to keep Altman himself accountable to safety goals and replace them with a group that Altman personally leads".

Eliminating Ilya's team was part of the post-coup restructuring to consolidate Altman's power, and every tribute to "safety" paid since then is spin.

by richk449 3 hours ago

https://www.merriam-webster.com/dictionary/dissolve

Definitions of dissolve:

to cause to disperse or disappear

to separate into component parts

to become dissipated (see DISSIPATE sense 1) or decompose

BREAK UP, DISPERSE

Those seem like pretty accurate descriptions of what happened. Yes, dissolve can also mean something stronger, so perhaps it is fair to call the statement ambiguous. But it isn’t incorrect.

by simpaticoder 8 hours ago

A bit of a "dog bites man" story to note that a CEO of a hot company is hyping the future beyond reason. The real story of LLMs is revealed when you posit a magical technology that can print any car part for free.

How would the car industry change if someone made a 3D printer that could make any part, including custom parts, with just electricity and air? It is a sea change to manufacturers and distributors, but there would still be a need for mechanics and engineers to specify the correct parts, in the correct order, and use the parts to good purpose.

It is easy to imagine that the inventor of such a technology would probably start talking about printing entire cars - and if you don't think about it, it makes sense. But if you think about it, there are problems. Making the components of a solution is quite different from composing a solution. LLMs exist under the same conditions. Being able to generate code/text/images is of no use to someone who doesn't know what to do with it. I also think this limitation is a practical, tacit solution to the alignment problem.

by theptip 7 hours ago

This argument simply asserts that the LLMs (or their successor systems including scaffolding) will asymptote somewhere below human capabilities.

It’s possible that this could happen but you need to propose a mechanism and metric for this argument to be taken seriously (and to avoid fooling yourself with moving goalposts). Under what grounds do you assert that the trend line will stop where you claim it will stop?

Yes, if super-human AGI simply never happens then the alignment problem is mostly solved. Seems like wishful thinking to me.

by richardw 5 hours ago

This. It’s far harder to think of reasons that limits will endure, in a world where innovation by inches has always produced improvements. Everyone brings out the current state and technology. Those are transient.

by semi-extrinsic 4 hours ago

WTF? Look at a word processor today, and then one from 1999, and tell me again how technology always keeps improving dramatically.

The standard electrical wall sockets that you use have not really changed since WW2. For load bearing elements in buildings, we don't have anything substantially better today than 100 years ago. There is a huge list of technological items where we've polished out almost every last wrinkle and a 1% gain once a decade is hailed as miraculous.

by jcgrillo 3 hours ago

There is an interesting delusion among web company workers that goes something like “technology progress goes steadily up and to the right” which I think comes from web company managers who constantly prattle on about “innovation”. Your word processor example is a good one, because at some point making changes to the UI just hurts users. So all that empty “innovation” that everyone needs to look busy doing is actually worse than doing nothing at all. All that is a roundabout way to say I think tech workers have some deep need to see their work as somehow contributing to “progress” and “innovation” instead of just being the meaningless undirected spasms of corporate amoebas.

by richardw 2 hours ago

Do you have any points that aren’t about the people you disagree with? Argue the facts. What are the limits that prevent progress on the dimension of replicating human intelligence?

by jcgrillo an hour ago

> Do you have any points that aren’t about the people you disagree with?

Yes...

> Argue the facts.

What?

> What are the limits that prevent progress on the dimension of replicating human intelligence?

I don't work in that field, but as a layman I'd wager the lack of clear technical understanding of what animal intelligence actually is, let alone how it works is the biggest limitation.

by richardw 3 hours ago

So what? Many animals haven’t changed much for millions of years, and that was irrelevant to our emergence. Not everything has to change in equal measure.

There are many reasons for all those things not to change. Limits abound. We discovered that getting taller or faster isn’t “better”; all we needed was to get smarter. Intelligence is different. It applies to everything else. You can lose a limb or eyesight and still be incredibly capable. Intelligence is what makes us able to handle all the other limits and change the world, even though MS Word hasn’t changed much.

We are now applying a lot of our intelligence to inventing another one. The architecture won’t stay the same, the limits won’t endure. People keep trying and it’s infinitely harder to imagine reasons why progress will stop. Just choose any limit and defend it.

by __MatrixMan__ 8 hours ago

I dunno, magically fabricating a part is fundamentally different than magically deciding where and when to do so.

AI can magically decide where to put small pieces of code. It's not a leap to imagine that it will later be good at knowing where to put large pieces of code.

I don't think it'll get there any time soon, but the boundary is less crisp than your metaphor makes it.

by rsynnott 7 hours ago

> AI can magically decide where to put small pieces of code.

Magically, but not particularly correctly.

by refulgentis 7 hours ago

> I dunno, magically fabricating a part is fundamentally different than magically deciding where and when to do so.

Right.

It sounds to me like you agree and are repeating the comment but are framing as disagreeable.

I'm sure I'm missing something.

by NitpickLawyer 7 hours ago

The person you replied to suggests that the analogy of a 3d printer building a part does not hold, as LLM-based coding systems are able to both "print" some code, and decide where and when to do so.

I tend to agree with them. What people seem to miss about LLM coding systems, IMO:

a) deciding on the coding capabilities of an LLM after a brief browser session with 4o/Claude is comparable to waking up a coder in the middle of the night and having them recite the perfect code right then and there. So a lot of people interact with it that way, decide it's meh, and write it off.

b) most people haven't tinkered with systems that incorporate more of the tools human developers use day to day (see the sketch after this list). They'd be surprised at what even small, local models can do.

c) LLMs seem perfectly capable to always add another layer of abstraction on top of whatever "thing" they get good at. Good at summaries? Cool, now abstract that for memory. Good at q/a? Cool, now abstract that over document parsing for search. Good at coding? Cool, now abstract that over software architecture.

d) Most people haven't seen any RL-based coding systems yet. That's fun.
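
For (b), here's a rough sketch of what "incorporating tools" can look like: a harness lets the model run a command, feeds the output back in, and repeats. Everything here is a placeholder (call_model stands in for whatever API you use), not any particular product's interface:

    import subprocess

    def call_model(messages):
        # Stand-in for whatever chat-completion client you use.
        raise NotImplementedError("plug in your LLM client here")

    def coding_loop(task, max_steps=10):
        messages = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            reply = call_model(messages)           # e.g. "RUN: pytest -q"
            messages.append({"role": "assistant", "content": reply})
            if reply.startswith("RUN: "):
                out = subprocess.run(reply[5:], shell=True,
                                     capture_output=True, text=True)
                messages.append({"role": "user",
                                 "content": out.stdout + out.stderr})
            else:
                return reply                       # model says it's done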

----

Now, of course the article is perfectly reasonable, and we shouldn't take what any CEO says at face value. But I think the pessimism, especially in coding, is also misplaced, and will ultimately be proven wrong.

by Filligree 5 hours ago

(D) includes me, and does sound fun. Got any references?

by 12_throw_away 3 hours ago

> Got any references?

This is a good question, and I worry you won't get a response. Here is a pattern I've observed very frequently in the LLM space, with much more frequency than random chance would suggest:

  Bob: "Oh, of course it didn't work for you, you just need to use an ANA (amazing new acronym) model"
  Alice: "Oh, that's great, where can I see how ANA works? How do I use it?"
  ** Bob has left the chat **
by NitpickLawyer 2 hours ago

Saw a private demo, felt as giddy as I felt when deepmind showed Mario footage. No idea when it'll be out.

by barrell 8 hours ago

I would say LLMs are much less “produce a perfect part from nothing” and more “cut down a tree and get the part you asked for, for a random model of car”.

by bee_rider 7 hours ago

Car analogies are a little fraught because, I mean, in some cases you can even die if you put just the wrong parts in your car, and they are pretty complex mechanically.

If you had a printer that could print semi-random mechanical parts, using it to make a car would be obviously dumb, right? Maybe you would use it to make, like, a roller blade wheel, or some other simple component that can be easily checked.

by btown 7 hours ago

> specify the correct parts, in the correct order, and use the parts to good purpose

While the attention-based mechanisms of the current generation of LLMs still have a long way to go (and may not be the correct architecture) to achieve requisite levels of spatial reasoning (and of "practical" experience with how different shapes are used in reality) to actually, say, design a motor vehicle from first principles... that future is far more tangible than ever, with more access to synthetic data and optimized compute than ever before.

What's unclear is whether OpenAI will be able to recruit and retain the talent necessary to be the ones to get there; even if it is able to raise an order of magnitude more than competitors, that's no guarantee of success. My guess would be that some of the decisions that have led to the loss of much senior talent will slow their progress in the long run. Time will tell!

by ryandrake 8 hours ago

> How would the car industry change if someone made a 3D printer that could make any part, including custom parts, with just electricity and air?

The invention would never see the light of day. If someone were to invent Star Trek replicators, they'd be buried along with their invention. Best case, it would be quickly captured by the ownership class and only be allowed to be used by officially blessed manufacturing companies, and not any individuals. They will have learned their lesson from AI and what it does to scarcity. Western [correction: all of] society is hopelessly locked into and dependent on manufacturing scarcity, and the idea that people have to pay for things. The wealthy and powerful will never allow free abundance of physical goods in the hands of the little people.

by d0gsg0w00f 7 hours ago

I don't know. Today it's extremely expensive to start new manufacturing competitors. You have to go and ask for tons of capital merely to play the game and likely lose. Anyone with that much capital is probably going to be skeptical of some upstart and consult with established industry leaders. This would be the opportunity for those leaders to step in and take the tech for themselves.

So to solve this problem you need billions to burn on gambles. I guess that's how we ended up with VC's.

by robertlagrant 7 hours ago

> Western society is hopelessly locked into and dependent on manufacturing scarcity, and the idea that people have to pay for things.

How do you reconcile that with the fact that Western society has invented, improved, and supplied many of the things we lament that other countries don't have (and those countries also lament it - it's not just our own Stockholm Syndrome)?

by ryandrake 6 hours ago

Most of what gets invented either 1. relies on its scarcity to extract value or 2. if it's not naturally scarce, it gets locked behind patents, copyrights, cartels, regulations, and license agreements between corporations, in order to keep the means of its production out of the hands of common people. Without that scarcity and ability to profit from it, it wouldn't be invented.

by CoastalCoder 7 hours ago

I'm curious if this is true.

Are there specific historical examples of this that come to mind?

by marcosdumay 7 hours ago

There are plenty of examples of good things being unavailable due to market manipulation and corrupt government.

But I don't know of anything nearly as extreme as destroying an entire invention. Those tend to stick around.

by rsynnott 7 hours ago

The closest real-life thing (and probably what a lot of believers in this particular conspiracy theory are drawing on directly) is probably the Phoebus lightbulb cartel: https://en.wikipedia.org/wiki/Phoebus_cartel

It’s rather hard to imagine even something like that (and it’s pretty limited in scope compared to the grand conspiracy above) working today, though; the EC would definitely stomp on it, and even the sleepy FTC would probably bestir itself for something so blatant.

by dgoodell 7 hours ago

And that’s not really the same thing. Did someone invent the LED bulb in 1920 and that cartel crushed it? Not really.

In reality, the biggest problem was they had no incentive to invest in new lighting technology research, although they had the money to do so. It takes a lot of effort to develop a new technology, and significantly more to make it practical and affordable.

I think the story of the development of the blue LED which led to modern LED lighting is more illustrative of the real obstacles of technological development.

Companies/managers don't want to invest in R&D because it’s too uncertain, and they typically are more interested in the short term.

And it’s hard for someone without deep technical knowledge to identify a realistic, worthwhile technical idea from a bad one. So they focus on what they can understand and what they can quantify.

And even technical people can fail to properly evaluate ideas that are even slightly outside their area of expertise (or even sometimes the ones that are within it).

by marcosdumay 7 hours ago

It only worked at the time because it benefited the public at large.

by pulvinar 7 hours ago

You're asking for historical examples of great inventions that were hidden from history. I only know of those great inventions that I've hidden...

There are a number of counterexamples though. Henry Ford, etc.

by l33t7332273 7 hours ago

Just wondering if you’ve actually hidden inventions you would actually consider to be great?

by tjpnz 7 hours ago

>The wealthy and powerful will never allow free abundance of physical goods in the hands of the little people.

Then there would be a violent revolution which wrests it out of their hands. The benefits of such a technology would be immediately obvious to the layman, and he would not allow it to be hoarded by a select few.

by robertlagrant 7 hours ago

Only if someone can make money from it. If the people who have it can't do that (or won't) then it won't happen. E.g. there's no private industry for nuclear bombs selling to any bidder in the world. But anything else you just need to let people make companies and that'll get it out there, in a thousand shades of value to match all the possible demands.

by addcn 7 hours ago

Printed this out and pasted it into my journal. Going to come back to it in a few years. This touches on something important I can’t quite put into words yet. Some fundamental piece of consciousness that is hard to replicate - desire, maybe.

by plaidfuji 4 hours ago

Desire is a big part of it! Right now LLMs just respond. What if you prompted an LLM, “your goal is to gain access to this person’s bank account. Here is their phone number. Message with them until you can confirm successful access.”

Learning how to get what you want is a fundamental skill you start learning from infancy.

by surfingdino 7 hours ago

Not enough plastics, glass, or metal in the air to make it happen. You need a scrapyard. Actually, that's how the LLMs treat knowledge. They run around like Wall-E grabbing bits at random and assembling them in a haphazard way to quickly give you something that looks like the thing you asked for.

by kylecazar 7 hours ago

Your post is a good summarization of what I believe as well.

But what's interesting when I speak to laymen is that the hype in the general public seems specifically centered on the composite solution that is ChatGPT. That's what they consider 'AI'. That specific conversational format in a web browser, as a complete product. That is the manifestation of AI they believe everyone thinks could become dangerous.

They don't consider the LLM APIs as components of a series of new products, because they don't understand the architecture and business models of these things. They just think of ChatGPT and UI prompts (or its competitors' versions of the same).

by bee_rider 7 hours ago

I think people think* of ChatGPT not as the web UI, but as some mysterious, possibly thinking, machine which sits behind the web UI. That is, it is clear that there’s “something” behind the curtain, and there’s some concern maybe that it could get out, but there isn’t really clarity on where the thing stops and the curtain begins, or anything like that. This is more magical, but probably less wrong, than just thinking of it as the prompt UI.

*(which is always a risky way of looking at it, because who the hell am I? Neither somebody in the AI field, nor completely naive toward programming, so I might be in some weird knows-enough-to-be-dangerous-not-enough-to-be-useful valley of misunderstanding. I think this describes a lot of us here, fwiw)

by wkat4242 6 hours ago

An LLM is not going to go skynet. It's not smart enough and can't take initiative.

An AGI however could. Once it reaches an IQ of more than, say, 500, it would become very hard to control.

by lotsoweiners 5 hours ago

Power cord?

by wkat4242 an hour ago

It won't be that simple because it will anticipate your every move once it gets intelligent enough.

by bee_rider 5 hours ago

It is possible that some hypothetical super intelligent future AI will do something very clever like hide in a coding assistant and then distribute bits of itself around the planet in the source code of all of our stoplights and medical equipment.

However I think it’s more likely that it will LARP as “I’m an emotionally supportive beautiful AI lady, please download me to your phone, don’t take out the battery or I die!”

by wkat4242 an hour ago

> It is possible that some hypothetical super intelligent future AI will do something very clever like hide in a coding assistant and then distribute bits of itself around the planet in the source code of all of our stoplights and medical equipment.

That was part of the plot of Person of Interest. A really interesting show; it started as a basic "monster of the week" show, but near the end it developed a much more interesting plot.

Although most of the human characters were extremely one-dimensional, especially Jim Caviezel's, who was just a grumpy super-soldier in every episode. It was kinda funny because they called him "the man in the suit" in the series, and there was indeed little else to identify his character. The others were hardly better :(

But the AI storyline I found very interesting.

by ctoth 5 hours ago

Here are two explanations of why cutting the power won't help you once you reach that state. In short, you have already lost.

[0]: No Physical Substrate, No Problem https://slatestarcodex.com/2015/04/07/no-physical-substrate-...

[1]: It Looks Like You’re Trying To Take Over The World https://gwern.net/fiction/clippy

by throwaway5752 7 hours ago

That's a really good comment and insight, but understandably I think it is aimed at this forum and a technical audience. It landed well for me in terms of the near-term impact of LLMs and other models. But outside this forum, I think our field is in a crisis from being very substantially oversold and undersold at the same time.

We have a very limited ability to define human intelligence, so it is almost impossible to know how near or far we are from simulating it. Everyone here knows how much of a challenge it is to match average human cognitive abilities in some areas, and human brains run at 20 watts. There are people in power that may take technologists and technology executives at their word and move very large amounts of capital on promises that cannot be fulfilled. There was already an AI Winter 50 years ago, and there are extremely unethical figures in technology right now who can ruin the reputation of our field for a generation.

On the other hand, we have very large numbers of people around the world on the wrong end of a large and increasing wealth gap. Many of those people are just hanging on, doing jobs that are actually threatened by AI. They know this, they fear this, and of course they will fight for their and their families' lifestyles. This is a setup for large-scale violence and instability. If there isn't a policy plan right now, AI will suffer populist blowback.

Aside from those things, it looks like Sam has lost it. The recent stories about the TSMC meeting, https://news.ycombinator.com/item?id=41668824, were a huge problem. Asking for $7T shows a staggering lack of grounding in reality and in how people, businesses, and supply chains work. I wasn't in the room and I don't know if he really sounded like a "podcasting bro", but to make an ask like that of companies, with their own capital, is insulting to them. There are potential dangers of applying this technology; there are dangers of overpromising the benefits; and neither of them are well served when relatively important people in related industries think there is a credibility problem in AI.

by cheschire 7 hours ago

LLMs aren't really simulating intelligence so much as echoing intelligence.

The problem is when the hype machine causes the echoes to replace the original intelligence that spawned the echoes, and eventually those echoes fade into background noise and we have to rebuild the original human intelligence again.

by throwaway5752 6 hours ago

I appreciate this, that is why I said, "LLMs and other models". Knowing the probability relations between words, tokens, or concepts/thought vectors is important, and can be supplemented by smaller embedded special-purpose models/inference engines and domain knowledge in those areas.

As I said, it is overhyped in some areas and underhyped in others.

by olliem36 8 hours ago

Great analogy! I'll borrow this when explaining my thoughts on how LLMs are poised to replace software engineers.

by rapind 8 hours ago

I tried replacing myself (coding hat) and it was pretty underwhelming. Some day maybe.

by bboygravity 7 hours ago

That comparison would make sense if the company were open source, non-profit, promised to make all designs available for free, took Elon Musk's money and then broke all promises, including the one in its name, and started competing with Musk.

by lawn 6 hours ago

> The real story of LLMs is revealed when you posit a magical technology that can print any car part for free.

I think the insight is that some people truly believe that LLMs would be exactly as groundbreaking as a magical 3D printer that prints out any part for free.

And they're pumping AI madly because of this belief.

by djjfksbxn 7 hours ago

> A bit of a "dog bites man" story to note that a CEO of a hot company is hyping the future beyond reason.

Why is it in your worldview a CEO “has to lie”?

Are you incapable of imagining one where a CEO is honest?

> The real story of LLMs is revealed when you posit a magical technology that can print any car part for free.

I’ll allow it if you stipulate that randomly and without reason, when I ask for an alternator, it prints me a toy dinosaur.

> It is easy to imagine that the inventor of such a technology

As if the unethical sociopath TFA is about is any kind of, let alone the, inventor of genai.

> Being able to generate code/text/images is of no use to someone who doesn't know what to do with it.

Again, conveniently omitting the technology’s ever present failure modes.

by Null-Set 7 hours ago

A CEO has a fiduciary duty to lie.

by Capricorn2481 7 hours ago

A CEO of a publicly traded company who has over-leveraged their position and is now locked into chasing growth, even if the company they're running suffers, has a fiduciary duty to lie. There are plenty of CEOs not in this position.

I'm not talking about Altman in particular, I'm just annoyed with this constant spam on HN about how we all need to turn a blind eye to snake oil salesmen because "that's just how it's supposed to be for a startup."

For a forum that complains about how money ruins everything, from the Unity scandal to OSS projects being sponsored and "tainted" by "evil companies," it's shocking to see how often golden boy executives are excused. I wish people had this energy for the much smaller companies trying to be profitable by raising subscriptions once in the 20 years they've been running, but instead they are treated like they burned a church. It truly is an elitist system.

by slashdave 5 hours ago

Perhaps someone should have considered what problem they are trying to solve before spending vast resources on the "solution"?

by vbezhenar 8 hours ago

There are plenty of plastic parts in cars and you can print them with a 3D printer. I don't think that anything really changed because of that.

by edgyquant 8 hours ago

Because those are irrelevant to the point being made in the GP

by paulyy_y 8 hours ago

GP? What happened to the good old "OP"?

by rendall 8 hours ago

OP is original poster (or post), to which we are not referring.

by rendall 8 hours ago

I think that is GP's point.

by austinkhale 8 hours ago

There are legit criticisms of Sam Altman that can be levied but none of them are in this article. This is just reductive nonsense.

The arguments are essentially:

1. The technology has plateaued, not in reality, but in the perception of the average layperson over the last two years.

2. Sam _only_ has a record as a deal maker, not a physicist.

3. AI can sometimes do bad things & utilizes a lot of energy.

I normally really enjoy The Atlantic since their writers at least try to include context & nuance. This piece offers neither.

by BearOso 8 hours ago

I think LLM technology, not necessarily all of CNN, has plateaued. We've used up all the human discourse, so there's nothing to train it on.

It's like fossil fuels. They took hundreds of millions of years to create and only centuries to consume. We can't just create more.

Another problem is that the data sets are becoming contaminated, creating a reinforcement cycle that makes LLMs trained on more recent data worse.

My thoughts are that it won't get any better with this method of just brute-forcing data into a model like everyone's been doing. There needs to be some significant scientific innovations. But all anybody is doing is throwing money at copying the major players and applying some distinguishing flavor.

by theptip 6 hours ago

What data are you using to back up this belief?

Progress on benchmarks continues to improve (see GPT-o1).

The claim that there is nothing left to train on is objectively false. The big guys are building synthetic training sets, moving to multimodal, and are not worried about running out of data.

o1 shows that you can also throw more inference compute at problems to improve performance, so it gives another dimension to scale models on.

by KaiserPro 4 hours ago

> Progress on benchmarks continues to improve (see GPT-o1).

that's not evidence of a step change.

> The big guys are building synthetic training sets

Yes, that helps to pre-train models, but it's not a replacement for real data.

> not worried about running out of data.

they totally are. The more data, the more expensive it is to train. Exponentially more expensive.

> o1 shows that you can also throw more inference compute

I suspect that it's not actually just compute; it's changes to training and model design.

by senko 7 hours ago

Actually, the sources we had (everything scraped from the internet) turn out to be pretty bad.

Imagine not going to school and instead learning everything from random blog posts or reddit comments. You could do it if you read a lot, but it's clearly suboptimal.

That's why OpenAI, and probably every other serious AI company, is investing huge amounts in generating (proprietary) datasets.

by chuckledog 5 hours ago

GitHub, especially filtered by starred repos, is a pretty high quality dataset.

by askafriend 5 hours ago

Any thoughts on synthetic data?

by rocho 2 hours ago

See "AI models collapse when trained on recursively generated data"

https://www.nature.com/articles/s41586-024-07566-y

by slashdave 5 hours ago

Dead end. You cannot create information out of nothing.

by Lerc an hour ago

You're a creationist then?

by Filligree 4 hours ago

Which is why thought experiments are always useless.

by croes 4 hours ago

If Sam claims we'll fix the climate with the help of AI, he is either a liar or a fool.

Our problem isn't technology, it's humans.

Unless he suggests mass indoctrination via AI, AI won't fix anything.

by _cs2017_ 5 hours ago

To avoid disappointment, just think of the mass news media as a (shitty) LLM. It may occasionally produce an article that on the surface seems to be decently thought out, but it's only because the author accidentally picked a particularly good source to regurgitate. Ultimately, they just type some plausible sentences without knowing or caring about the quality.

by deepsquirrelnet 8 hours ago

> At a high enough level of abstraction, Altman’s entire job is to keep us all fixated on an imagined AI future

I think the job of a CEO is not to tell you the truth; more often than not, the truth is the opposite of what they say.

What if gpt5 is vaporware, and there’s no equivalent 3 to 4 leap to be realized with current deep learning architectures? What is OpenAI worth then?

by vbezhenar 8 hours ago

I'm paying my subscription and I'd probably pay 5x more if they charged it to keep access to the current service. ChatGPT 4o is incredibly useful for me today, regardless of whether GPT5 will be good or not. I'm not sure how that reflects OpenAI's costs, but those company costs are just bubbles of air anyway.

by bhy 6 hours ago

But note that the price they can charge is based on market supply and demand. If Claude is priced at $20, ChatGPT won’t be able to charge $100.

by peteforde 5 hours ago

If OpenAI sent out an email today informing me that to maintain access to the current 4o model I will have to pay $1000 a year, and that it would go up another $500 next year... it would still be well worth it to me.

by rocho 2 hours ago

To you maybe. But if Claude or any other competitor with similar features and performance keeps a lower price, most users will migrate there.

by coffeefirst 8 hours ago

Would you be willing to share how you're using it?

I keep hearing from people who find these enormous benefits from LLMs. I've been liking them as a search engine (especially finding things buried in bad documentation), but can't seem to find the life-changing part.

by vbezhenar 8 hours ago

1. Search engine replacement. I'm using it for many queries I asked Google before. I still use Google, but less often.

2. To break procrastination loops. For example, I often can't name a particular variable because I can see a few alternatives and don't like any of them. Nowadays I just ask ChatGPT and often proceed with its suggestion.

3. Navigating less-known technologies. For example, my Python knowledge is limited and I don't really use it often, so I don't want to spend time learning it better. ChatGPT is just perfect for that kind of task, because I know what I want to get, I just miss some syntax nuances, and I can quickly check the result. Another example is jq: it's a very useful tool, but its syntax is arcane and I can't remember it even after years of occasional tinkering. ChatGPT builds jq programs like a super-human; I just show example JSON and what I want to get (a quick sketch of what that looks like is below).

4. Not ChatGPT, but I think Copilot is based on GPT-4, and I use Copilot very often as a smart autocomplete. I didn't really adopt it as a code-writing tool, I'm very strict about the code I produce, but it still helps a lot with repetitive fragments. Things I had to spend 10-20 minutes on before, constructing regexps or using editor macros, I can now do with Copilot in 10-20 seconds. For languages like Golang where I must write `if err != nil` after every line, it also helps me not to go crazy.

Maybe I didn't formulate my thoughts properly. It's not anything irreplaceable and I didn't become a 10x programmer. But those tools are very nice and absolutely worth every penny I pay for them. It's like IntelliJ IDEA: I can write Java in notepad.exe, but I'm happy to pay $100/year to JetBrains and write Java in IDEA.
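To make the jq point concrete, here is a minimal sketch of the kind of exchange I mean (the file name and data are invented for illustration): show the input JSON, say what output you want, and get back a one-liner that would otherwise take a trip through the manual.

    # invented example input
    $ cat orders.json
    [{"customer":"alice","items":[{"price":5},{"price":7}]},
     {"customer":"bob","items":[{"price":3}]}]

    # "give me each customer with the total price of their items"
    $ jq -c '[.[] | {customer, total: ([.items[].price] | add)}]' orders.json
    [{"customer":"alice","total":12},{"customer":"bob","total":3}]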

by YeGoblynQueenne 6 hours ago

>> 3. Navigating less known technologies. For example my Python knowledge is limited and I don't really use it often, so I don't want to spend time to better learn it.

Respectfully, that's a bit like saying you don't need to learn how to ride a bicycle because you can use a pair of safety wheels. It's an excuse not to learn something, to keep yourself less knowledgeable and skillful than you could really be. Why stunt yourself? Knowledge is power.

See it this way: anyone can use ChatGPT but not everyone knows Python well, so you'll never be able to use ChatGPT to compete with someone who knows Python well. You + limited knowledge of Python + ChatGPT << someone + good knowledge of Python + ChatGPT.

Experiences like the one you relay in your comment make me think that using LLMs for coding in particular is betting on short-term gains for a disproportionately large long-term cost. You can move faster now, but there's a limit to what you can do that way and you'll never be able to escape it.

by jdiez17 5 hours ago

> and you'll never be able to escape it.

Slow down, cowboy. Getting a LLM to generate code for you that is immediately useful and doesn't require you to think too hard about it can stunt learning, sure, but even just reading it and slowly getting familiar with how the code works and how it relates to your original task is helpful.

I learned programming by looking at examples of code that did similar things to what I wanted, re-typing it, and modifying it a bit to suit my needs. From that point of view it's not that different.

I've seen a couple of cases first hand of people with no prior experience with programming learn a bit by asking ChatGPT to automate some web scraping tasks or spreadsheet manipulation.

> You + limited knowledge of Python + ChatGPT << someone + good knowledge of Python + ChatGPT.

Subtract ChatGPT from both sides and you have a rather obvious statement.

> Respectfully but that's a bit like saying you don't need to learn how to ride a bicycle because you can use a pair of safety wheels.

How did you learn to ride a bicycle?

by peteforde 5 hours ago

Respectfully, this is an incredibly weak take.

You are making a lot of assumptions about someone's ability to learn with AND without assistance, while also making rather sci-fi leaps about our brains somehow being able to distinguish learning that has been tainted by the tendrils of ML overlords from learning that hasn't.

The models and the user interface around them absolutely will continue to improve far faster than any one person's ability to obtain subject mastery in a field.

by YeGoblynQueenne 4 hours ago

Just to clarify, I didn't say anything about the OP's "ability to learn". I know nothing about that and can't tell anything about it from their comment. I also didn't say anything about how our brain works, or about "the tendrils of ML overlords".

If you want to have a debate, I'm all for it, but if you're going to go around imagining things that I may have said in another timeline, then I don't see the point of that.

by peteforde 5 hours ago

I can't even summarize how much GPT4x has helped me teach myself engineering skills over the past two years. It can help me accomplish highly specific and nuanced things in CAD, plan interactions between component parts (which it helped me choose) on PCBs that it helps me lay out, and figure out how to optimize everything from switching regulators to preparing for EM certification.

And I could say this about just about every domain of my life. I've trained myself to ask it about everything that poses a question or a challenge, from creating recipes to caring for my indoor Japanese maple tree to preparing for difficult conversations and negotiations.

The idea of "just" using it to compose emails or search for things seems frustrating to me, even reading about it. It's actually very hard for me to capture all of this in a way that doesn't sound like I'm insulting the folks who aren't there yet.

I'm not blindly accepting everything it says. I am highly technical and I think competent enough to understand when I need to push back against obvious or likely hallucinations. I would never hand its plans to a contractor and say "build this". It's more like having an extra, incredibly intuitive person who just happens to contain the sum of most human knowledge at the table, for $20 a month.

I honestly don't understand how the folks reading HN don't intuitively and passionately lean into that. It's a freaking superpower.

by SoftTalker 5 hours ago

> I honestly don't understand how the folks reading HN don't intuitively and passionately lean into that. It's a freaking superpower.

It is difficult to get a man to understand something when his salary depends upon his not understanding it.

Many of us here would see our jobs eliminated by a sufficiently powerful AI, perhaps some have already experienced it. You might as well. If you use AI so much, what value do you really provide and how much longer before the AI can surpass you at that?

by peteforde 3 hours ago

Only time will tell. In the meantime, I am not losing any sleep.

There's a lot of people in technical roles who chose to study programming and work at tech companies because it seemed like it would pay more than other roles. My own calculation is that the tech-but-could-have-just-as-easily-been-a-lawyer cohort will be the first to find themselves replaced. Is that revealing a bias? Absolutely.

Actual hackers, in the true sense, will have no trouble finding ways to continue to be useful and hopefully well compensated.

by henry2023 7 hours ago

I’ve got a local Llama 3.2 3B running on my Mac. I can query it for recipes, autocomplete obvious code (the only thing GitHub Copilot was useful for), and answer simple questions when provided with a little bit of context.

All with much lower latency than an HTTP request to a random place, knowing that my data can’t be used to train anything, and it’s free.

It’s absolutely insane this is the real world now.
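For anyone wondering what that looks like in practice, here is a minimal sketch, assuming the model is served through something like Ollama (the prompts are invented):

    # one-shot use from the terminal
    $ ollama run llama3.2:3b "Suggest a 20-minute dinner using chickpeas and spinach"

    # or over the local HTTP API; nothing leaves the machine
    $ curl -s http://localhost:11434/api/generate \
        -d '{"model": "llama3.2:3b", "prompt": "Explain this jq error: ...", "stream": false}'

An editor autocomplete plugin would typically point at the same local endpoint, which is where the low latency comes from.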

by mark_l_watson 5 hours ago

+1 for sure, running Llama 3.2 3B locally is super fast and useful. I have been pushing it for local RAG and code completion also. I bought a 32GB-memory Mac six months ago, which I now regret because the small local models are now extremely useful, run fine on old 8GB-memory Macs, and support all the fun experiments I want to do.

by nicolas_t 6 hours ago

What do you use to interface with llama for autocomplete? And what editor do you use?

Not wanting my data to be sent to random places is what has limited my use of tools like Copilot (so I'd only use it very sparingly, after considering whether sending the data would be a breach of NDA or not).

by exitb 8 hours ago

That being said, 4o is functionally in the same league as Claude, which makes this a whole different story. One in which the moat is already gone.

by cableshaft 7 hours ago

Yeah I stopped my subscription during 4 thinking it wasn't super useful, but 4o does seem to handle programming logic pretty well. Better than me if given the same timeframe (of seconds) at least.

It's helped me stay productive on days when my brain just really doesn't want to come up with a function that does some annoying fairly complex bit of logic and I'd probably waste a couple hours getting it working.

Before I'd throw something like that at it, and it'd give me something confidently that was totally broken, and trying to go back and forth to fix it was a waste of my time.

Now I get something that works pretty well, and maybe I just need to tweak something a bit because I didn't give it enough context or didn't quite go over all the inconsistencies and exceptions in the business logic given by the requirements. (Also I can't actually use it on client machines, so I have to type it manually to and from another machine; since I'm not copy-pasting anything, I try to get away with typing less.)

I'm not typing anything sensitive, btw, this is stuff you might find on Stack Overflow but more convoluted, like "search this with this exception and this exception because that's the business requirement and by these properties but then go deeper into this property that has a submenu that also needs to be included and provide a flatlist but group it by this and transform it so it fits this new data type and sort it by this unless this other property has this value" type of junk.

by raincole 8 hours ago

> What if gpt5 is vaporware, and there’s no equivalent 3 to 4 leap to be realized with current deep learning architectures?

Sam Altman himself doesn't know whether it's the case. Nobody knows. It's the nature of R&D. If you can tell whether an architecture works or not with 100% confidence, it's not cutting edge.

by deepsquirrelnet 5 hours ago

We got to ~gpt4 by scaling model parameters, and then several more companies did too.

That’s dead, and OpenAI knows that much. There will be more advances, but they aren’t going to admit that we’re down to incremental progress until there’s a significant breakthrough. They need to stay afloat and say whatever it takes to bridge the gap.

by barrell 8 hours ago

Sam Altman has explicitly said the next model will be an even bigger jump than between 3 and 4.

I think that was before 4o? I know 4o-mini and o1 for sure have come out since he said that

by MOARDONGZPLZ 7 hours ago

> Sam Altman has explicitly said the next model will be an even bigger jump than between 3 and 4.

You say that unironically, on an article stating that Sam Altman cannot be taken at his word, in a string of comments about him hyping up the next thing so he can exit strongly on the backs of the next greater fool. But seriously, I’m sure GPT-5 will be the greatest leap in history (OpenAI equity holder here).

by patcon 8 hours ago

> Nobody knows.

I suspect it's a little different. AI models are still made of math and geometric structures. Like mathematicians, researchers are developing intuitions about where the future opportunities and constraints might be. It's just highly abstract, and until someone writes the beautiful Nautilus Mag article that helps a normie see the landscape they're navigating, we outsiders see it as total magic and unknowable.

But Altman has direct access to the folks intuiting through it (likely not validated intuitions, but still insight)

That's not to say I believe him. Motivations are very tangled and meta here

by ThinkBeat 8 hours ago

If a CEO lies all the time, and investors make investments because of it, will that not turn out to become a problem for the CEO?

by 42lux 8 hours ago

Well let's take Tesla, FSD and Elon as an example, where a judge just ruled[0] that it's normal corporate puffery and not lies.

[0] https://norcalrecord.com/stories/664710402-judge-tosses-clas...

by Zigurd 8 hours ago

The 5th circuit's loose attitude about deception is why all Elon's Xs live in Texas, or will soon.

It's not trivial. "Mere puffery" has netted Tesla about $1B in FSD revenue.

by edgyquant 8 hours ago

Elon's lies are way different. He mostly is just optimistic about time frames. Since Sam Altman and ChatGPT became mainstream, the narrative has been about the literal end of the world, and OpenAI and their influencer army have made doomerism their entire marketing strategy.

by kevin_thibedeau 7 hours ago

When you're taking in money it's not blind optimism. They sold gullible people a feature that was promised to be delivered in the near future. Ten years later it's clearly all a fraud.

by moogly 7 hours ago

> He mostly is just optimistic about time frames

Perhaps if you have a selective memory. There are plenty of collections of straight-up, set-in-stone falsehoods to find on the internet, if you're interested.

by butterfly42069 8 hours ago

It seems that continuously promising something is X weeks/months/years away is seen as optimism and belief, not blatant disregard for facts.

I'm sure the defence is always, "but if we just had a bit more money, we would've got it done"

by Zigurd 8 hours ago

Elizabeth Holmes would like a referral to a lawyer who could make that case.

by comfysocks 5 hours ago

Holmes, Balwani and co lied about their current technology, which is a step beyond making overly optimistic future projections. They claimed to have their own technology when they were actually using their competitor’s machines with diluted blood samples.

by butterfly42069 7 hours ago

I think her end product was too clearly defined for anything she was doing to be passed as progress. I don't think you could make the case for her.

You can make a case that partial self-driving is a route to FSD, the ISS is a route to Mars, and (you can make a potentially slightly less compelling case) LLMs are on the way to AGI.

No one can make a case that that lady was on a route to the tech she promised.

by slashdave 5 hours ago

There is a line. Forging documents crosses it.

by throwintothesea 7 hours ago

Elizabeth Holmes screwed up by telling the wrong lies and also having the wrong chromosomes. She should've called some cave rescuers pedophiles, then maybe people would've respected her.

by blindriver 8 hours ago

That’s not what happened to Theranos and Elizabeth Holmes.

by butterfly42069 7 hours ago

Well I didn't say everyone could pull it off successfully

I think the more abstract and less defined the end goal is, the easier it is to make everything look like progress.

The blood testing lady was a pass/fail really. FSD/AGI are things where you can make anything look like a milestone. Same with SpaceX going to Mars.

by kloop 7 hours ago

They made medical claims. That's a very bad idea if you aren't 100% sure

by slashdave 5 hours ago

Forget that. Pasting logos of big-name companies on forged documents. I mean, seriously?

by Agentus 8 hours ago

Well, I think this conversation chain has played out multiple times, spanning back to at least Thomas Edison. Oftentimes nothing is certain in a business where you need to take a chance trying to bring an imagined idea to fruition with millions of dollars of investor money.

CEOs more often come from marketing backgrounds than from other disciplines, for the very reason that they have to sell stakeholders, employees, and investors on the possibilities. If a CEO's myth-making turns out to be a lie 50 to 80 percent of the time, he's still a success, as with Edison, Musk, Jobs, and now Altman.

But I think AI CEOs seem to be imagining and peddling wilder, fancier myths than average. If AI technology pans out, then I don't feel they're unwarranted. I think there's enough justification, but I'm biased and have been doing AI for 10 years.

To your question: if a CEO's lies don't accidentally turn true eventually, as in the case of Holmes, then yes, it's a big problem.

by throwintothesea 7 hours ago

Come now, it's only lying when poor people do it, don'tcha know.

by rubyfan 8 hours ago

depends which investors lose money

by angulardragon03 8 hours ago

This is the Tesla Autopilot playbook, which seems to continue to work decently for that particular CEO

by tux3 8 hours ago

I'm not sure about the decency of it.

I remember all the articles praising the facade of a super-genius. It's a stark contrast to today.

People write about his troubles or his latest outburst like they would a neighbor's troubled kid. There's very little decency in watching people sink like that.

What's left after reality reasserts itself, after the distortion field is gone? Mostly slow decline. Never to reach that high again.

by ben_w 8 hours ago

The difference between Tesla and Nikola is that some false claims matter more than others: https://www.npr.org/2022/10/14/1129248846/nikola-founder-ele...

Given Altman seems to be extremely vague about exact timelines and mainly gives vibes, he's probably doing fine. Especially as half the stuff he says is, essentially, to lower expectations rather than to raise them.

by almatabata 6 hours ago

Funnily I came across an interview from around 5 years ago where he straight up admitted that he had no clue how to generate a return on investment back then (see https://www.youtube.com/watch?v=TzcJlKg2Rc0&t=1920s)

"We have no current plans to make revenue."

"We have no idea how we may one day generate revenue."

"We have made a soft promise to investors that once we've built a general intelligence system, basically we will ask it to figure out a way to generate an investment return for you."

The fact that he has no clue how to generate revenue with an AGI without asking it shows his lack of imagination.

by slashdave 5 hours ago

Well, I mean the actual answer is raise more money from investors. But it is better to leave things up to the imagination of the listener.

by flappyeagle 5 hours ago

Or his honesty?

by slater 5 hours ago

Or maybe both

by Aeolun 8 hours ago

If you need someone to tell you the truth you don’t need a CEO.

What you need a CEO for is to sell you (and your investors) a vision.

by hn72774 8 hours ago

Without the truth, a vision is a hallucination.

It saddens me how easily someone with money and influence can elevate themselves to a quasi religious figure.

In reality, this vision you speak of is more like the blind leading the blind.

by squarefoot 8 hours ago

> It saddens me how easily someone with money and influence can elevate themselves to a quasi religious figure.

If so many people didn't fall for claims without any proof, religions themselves would not exist.

by ergonaught 7 hours ago

All knowledge is incomplete and partial, which is another way of saying "wrong", therefore all "vision" is hallucination. This discussion would not be happening without hallucinators with a crowd sharing their delusion. Humanity generally doesn't find the actual truth sufficiently engaging to accomplish much beyond the needs of immediate survival.

by llamaimperative 8 hours ago

This is silly. Delusion kills more companies than facing reality with honesty does.

by slashdave 5 hours ago

Actually, CEOs and other board members are supposed to be held to certain standards, specifically honesty and integrity. Some charters explicitly include working for the public good. Let's not forget that, and let's not get desensitized to certain behaviors.

by switch007 4 hours ago

That ship has long, long sailed

by latexr 8 hours ago

What a shitty world we constructed for ourselves, where the highest positions of power with the highest monetary rewards depend on being the biggest liar. And it’s casually mentioned and even defended as if that’s in any way acceptable.

https://www.newyorker.com/cartoon/a16995

by mrbungie 8 hours ago

A typical CEO's job is to guard and enforce a narrative. Great ones also work at adding to the narrative.

But it is always about the narrative.

by mola 6 hours ago

I thought it was about directing the execution of the company business. Defining strategy and being a driving force behind it.

But you are right, we live in a post truth influencer driven world. It's all about the narrative.

by antirez 8 hours ago

Not possible, since Claude is effectively GPT5 level in most tasks (EDIT: coding is not one of them). OpenAI lost the lead months ago. Altman talking about AGI (it may take decades or years, nobody knows) is just the usual crazy Musk-style CEO thing that is totally safe to ignore. What is interesting is the incredibly steady progress of LLMs so far.

by diggan 8 hours ago

> Claude is effectively GPT5 level

Which model? Sonnet 3.5? I subscribed to Claude for a while to test Sonnet/Opus, but never got them to work as well as GPT-4o or o1-preview. Mostly tried it out for coding help (Rust and Python mainly).

Definitely didn't see any "leap" compared to what OpenAI/ChatGPT offers today.

by antirez 8 hours ago

Both, depending on the use case. Unfortunately Claude is better than ChatGPT in almost every regard except coding so far. So you would not notice improvements if you test it only on code. Where it shines is understanding complex things and ideas in long text, and its context window is AFAIK 2x that of ChatGPT.

by diggan 7 hours ago

Tried it for other things too, but then they just seem the same (to me). Maybe I'll give it another try, if it has improved since last time (2-3 months maybe?). Thanks!

by naveen99 8 hours ago

The future is not binary, it’s a probability.

by meiraleal 8 hours ago

> What if gpt5 is vaporware

OpenAI decides what they call GPT-5. They are waiting for a breakthrough that would make people go "wow!". That's not even very difficult, and there are multiple paths. One is a much smarter GPT-4, which is what most people expect; another is a really good voice-to-voice or video-to-video feature that works seamlessly, the same way ChatGPT was the first chatbot that made people interested.

by deepsquirrelnet 7 hours ago

It’s more than that. Because of what they’ve said publicly and already demonstrated in the 3->4 succession, they can’t release something incremental as gpt5.

Otherwise people might get the impression that we’re already at a point of diminishing returns on transformer architectures. With half a dozen other companies on their heels and suspiciously nobody significantly ahead anymore, it’s substantially harder to justify their recent valuation.

by hdivider 4 hours ago

In my view we should also stop taking the Great Technoking at his word and move away from lionizing this old well-moneyed elite in general.

Real technological progress in the 21st century is more capital-intensive than before. It also usually requires more diverse talent.

Yet the breakthroughs we can make in this half-century can be far greater than any before: commercial-grade fusion power (where Lawrence Livermore National Lab currently leads, thanks to AI[1]), quantum computing, spintronics, twistronics, low-cost room-temperature superconductors, advanced materials, advanced manufacturing, nanotechnology.

Thus, it's much more about the many, not the one. Multi-stakeholder. Multi-person. Often led by one technology leader, sure, but this one person must uplift and be accountable to the many. Otherwise we get the OpenAI story, and an end-justifies-the-means type of groupthink among those who worship the technoking.

[1]: https://www.llnl.gov/article/49911/high-performance-computin...

by 1vuio0pswjnm7 an hour ago

"At a high enough level of abstraction, Altman's entire job is to keep us all fixated on an imagined AI future so we don't get too caught up in the underwhelming details of the present."

Old tactic.

The project that would eventually become Microsoft Corp. was founded on it. Gates told Ed Roberts, the inventor of the first personal computer, that he had a programming language for it. He had no such language.

Gates proceeded to espouse "vapourware" for decades. Arguably Microsoft and its disciples are still doing so today.

Will the tactic ever stop working? Who knows.

Focus on the future that no one can predict, not the present that anyone can describe.

by thruway516 7 hours ago

"Altman is no physicist. He is a serial entrepreneur, and quite clearly a talented one"

Not sure the record supports that if you remove OpenAI, which is a work in progress and supposedly not going too great at the moment. A talented 'tech whisperer' maybe?

by wslh 3 hours ago

Sam Altman is only 39 years old. Like it or not, it would be a fallacy to assume he's shown everything he's capable of. He likely has much more to contribute in his lifetime.

by _hyn3 2 hours ago

> The technologies never quite work out like the Altmans of the world promise, but the stories keep regulators and regular people sidelined while the entrepreneurs, engineers, and investors build empires. (The Atlantic recently entered a corporate partnership with OpenAI.)

Hilarious.

by rubyfan 8 hours ago

I sort of wish there was a filter for my life that would ignore everything AI (stories about AI, people talking about AI and of course content generated by AI).

The world has become a less trustworthy place for a lot of reasons and AI is only making it worse, not better.

by keiferski 7 hours ago

Sounds like a good startup idea. Just be sure to use AI for this filter so you can get funded.

by wslh 3 hours ago

Do you use LLMs?

by rubyfan 4 hours ago

Thanks! Now I just need to filter my LinkedIn feed and the corporate AI leaders at work.

by nabla9 6 hours ago

The field is extremely research oriented. You can't stay on top with good engineering and incremental development and refining.

Google just paid over $2.4 billion to get Noam Shazeer back in the company to work with Gemini AI. Google has the deepest pool of AI researchers. Microsoft and Facebook are not far behind.

OpenAI is losing researchers; they have maybe 1-2 years until they become a Microsoft subsidiary.

by xbar 4 hours ago

You make it sound like AI researchers are in short supply.

by danielmarkbruce 4 hours ago

For almost all jobs X, "really competent X" are in exceedingly short supply.

by xbar 3 hours ago

That is certainly true. But the market began searching in earnest for "really competent X" in this field only about 2 years ago. Before that, it was a decidedly opt-in affair.

by danielmarkbruce an hour ago

Are you saying the average AI researcher isn't good? I'm not arguing with you, I just can't parse what you are saying here.

by xbar 43 minutes ago

I'm saying that both the volume and upper bound are increasing, not becoming constrained.

by nabla9 3 hours ago

Good AI researchers and research teams are.

The whole LLM era was started by the 2017 "Attention Is All You Need" paper from Google Brain/Research, and nobody has done anything of the same magnitude since.

Noam Shazeer was one of the authors.

by vasilipupkin 8 hours ago

Don't listen to the David Karpfs of the world. Did he predict ChatGPT? If you asked him in 2018, he would have said AI will never write a story.

Now you can use AI to easily write the type of articles he produces, and he's pissed.

by throwgfgfd25 7 hours ago

> now you can use AI to easily write the type of articles he produces and he's pissed.

You really cannot.

by vasilipupkin 2 hours ago

Really? Are you sure? His article can basically be summed up as "don't believe AI hype from Sama." It's not particularly well written; he's no Nabokov. ChatGPT bangs out stuff like this effortlessly. Here, I did it for you: https://chatgpt.com/share/67019a6c-453c-8006-88aa-6f32435492...

by throwgfgfd25 2 hours ago

Oh dear me. I can't argue with you if this satisfies you.

by vasilipupkin an hour ago

Are you telling me his article is much better written?

Come on.

by ComplexSystems 7 hours ago

You can even take an enormous sample of articles that he has written and fine-tune a model on it, so that it really sounds like him.

by throwgfgfd25 6 hours ago

And it still won't produce the type of articles he produces. Because at the very least he is capable of writing new articles from something the LLM doesn't have: his brain.

Seriously. This is just the parrot thing again. The fact that AI proponents confuse the form of words with authorial intent is mindbending to me.

Wouldn't have confused Magritte, I think.

by SpicyLemonZest 5 hours ago

I’m not confused, I just disagree. I don’t think that authorial intent is something fundamentally different than text prediction.

When I’m writing out a comment, there’s no muse in my head singing the words to me. I have a model of who I am and what I believe - if I weren’t religious I might say I am that model - and I type things out by picking the words which that guy would say in response to the input I read.

(The model isn’t a transformer-based LLM, of course.)

by javed6542 6 hours ago

It would cost around $5 and about 5 hours of work to prove you wrong...

by throwgfgfd25 6 hours ago

You clearly, clearly do not understand what I am saying. But sure, waste your time and money making a parrot that, unlike the author it mimics, is incapable of introspection, reflection, intellectual evolution or simply changing its mind.

Words are words. Writers are writers. Writers are not words.

ETA: consider what would actually be necessary to prove me wrong. And when you hear back from David Karpf about his willingness to take part in that experiment, write a blog post about it and any results, post it to HN.

I am sure people here will happily suggest topics for the articles. I, for example, would love to hear what your hypothetical ChatKarpf has to say about influences from his childhood that David Karpf has never written about, or things he believed at age five that aren't true and how that affects his writing now.

Do you see what I mean? These aren't even particularly forced examples: writers draw on private stuff, internal thoughts, internal contradictions, all the time, consciously and unconsciously.

by askafriend 5 hours ago

You articulate this position well. I've tried to convey something similar and it's tough to find the words to explain to people. I really like this phrase:

"Words are words. Writers are writers. Writers are not words."

I'm very bullish on AI/LLMs but I think we do need to have a better shared understanding of what they are and what they aren't. I think there's a lot of confusion around this.

by throwgfgfd25 2 hours ago

> I really like this phrase:

Thank you. I don't think it really explains the distinction, of course. It just makes it clear there necessarily must be one, and it can't be wished away by discussions of larger training sets, more token context, or whatever. It never will be wished away.

by angarg12 3 hours ago

> Remember, these technologies already have a track record. The world can and should evaluate them, and the people building them, based on their results and their effects, not solely on their supposed potential.

But that's not how the market works.

by razodactyl 3 hours ago

I think we should consider companies that create or own their own hardware, the ability to generate cheap electricity and the ability of neural networks to continuously learn.

I still "feel the AGI". I think Ben Goertzel's recent talk on ML Street Talk was quite grounded / too much hype clouds judgement.

In all honesty, once the hype dies down, even if AGI/ASI is a thing - we're still going to be heads down back to work as usual so why not enjoy the ride?

Covid was a great eye-opener, we dream big but in reality people jump over each other for... toilet paper... gotta love that Gaussian curve of IQ right?

by mppm 7 hours ago

Around the time of the board coup and Sam's 7-trillion media tour, there were multiple, at the time somewhat credible, rumors of major breakthroughs at OpenAI -- GPT5, Q*, and possibly another unnamed project with wow-factor. However, almost a year has passed, and OpenAI has only made incremental improvements public.

So my question is: What does the AI rumor mill say about that? Was all that just hype-building, or is OpenAI holding back some major trump card for when they become a for-profit entity?

by ilrwbwrkhv 7 hours ago

All hype. Remember when the whole "oh we are so scared to release this model" thing happened back in the day, and it was worse than GPT-3?

All of this doing the rounds of foreign governments and acting like artificial general intelligence is just around the corner is what got him this fundraising round today. It's all just games.

by minimaxir 3 hours ago

Q* turned out to be GPT-o1, which was objectively overhyped.

by javed6542 6 hours ago

probably just waiting for the election to be over before releasing the next gen models

by thelastgallon 8 hours ago

"Although it will happen incrementally, astounding triumphs – fixing the climate, establishing a space colony, and the discovery of all of physics – will eventually become commonplace. With nearly-limitless intelligence and abundant energy – the ability to generate great ideas, and the ability to make them happen – we can do quite a lot." - Sam Altman, https://ia.samaltman.com/

Reality: AI needs unheard-of amounts of energy. This will make the climate significantly worse.

by jt2190 7 hours ago

> AI needs unheard amounts of energy…

… and it always will? It seems terribly limiting to stop exploring the potential of this technology because it’s not perfect right now. Energy consumption of AI models does not feel like an unsolvable problem, just a difficult one.

by YeGoblynQueenne 6 hours ago

AI, i.e. deep learning, consumes huge amounts of energy because it has to train on gigantic amounts of data and the training algorithms are borderline efficient. So to stop using huge amounts of energy someone would have to come up with a way to get the same results without relying on deep learning.

In other words all we need is a new technology revolution like the deep learning revolution, except one centered around a radically new approach that overcomes every limitation of deep learning.

Now: how likely do you think this is to happen any time soon? Note that the industry and most of academia have bet the bank on deep learning partly because they think that prospect is extremely unlikely.

by jt2190 5 hours ago

This feels like a “fixed mindset” about energy, i.e. the way we generate and allocate energy today can not change. Of all the “wasteful” things we do with energy AI is not the top of my (very subjective) list of things to shut down.

by YeGoblynQueenne 4 hours ago

No, what I'm saying is that the way we use energy to train neural nets can change in principle, but nobody knows how to do it.

by xbar 4 hours ago

I think you should study the tragedy of the commons to understand why it always will.

by KaiserPro 4 hours ago

Energy isn't a commons. It costs.

Also, the tragedy of the commons is based on a number of flawed assumptions about how commons work.

by meroes 8 hours ago

Wow, that quote is absolutely crazy. "Discovery of all of physics". I think that's the worst puffery/lie I've ever heard from a CEO. Science requires experiments, so the next LLM will have to be able to design CERN+++ level experiments down to the smallest detail. But that's not even the hard part; the hard part is the actual energy requirements, which are literally astronomical. So it's either going to have to discover a new method of energy generation along the way, or something else crazy. The true barrier for physics right now is energy. But that's just the next level of physics, not ALL of it.

Edit: Also why are you getting downvoted...

by moogly 7 hours ago

I can't imagine anyone writing that paragraph of his with a straight face. The chuckling must've lasted a week.

by golergka 8 hours ago

So far it seems that AI's appetite for energy might finally be pushing western countries back to nuclear, which would make the climate significantly better.

by UncleMeat 8 hours ago

A world where we produce N watt-hours of energy without nuclear plants and a world where we produce N+K watt-hours of energy with K watt-hours coming from nuclear has exactly the same effect on the climate.

by kibwen 8 hours ago

Unfortunately no, this is not how it works.

The relative quantity of power provided by nuclear (or renewables, for that matter) is NOT our current problem. The problem is the absolute quantity of power that is provided by fossil fuels. If that number does not decrease, then it does not matter how much nuclear or renewables you bring online. And nuclear is not cheaper than fossil fuels (even if you remove all regulation, and even if you build them at scale), so it won't economically incentivize taking fossil fuel plants offline.

by ccppurcell 8 hours ago

Nuclear cannot "make the climate better" but can perhaps slow the destruction down, only if it replaces fossil fuels, not if it is added on top due to increased energy consumption. In that case it's at best neutral.

by collingreen 8 hours ago

Nuclear and everything we were using before is probably not better than just everything we were using before. Hopefully we can continue to reduce high emission or otherwise damaging power production even while power requirements grow.

by golergka 8 hours ago

Power means making things and providing services that people want, that make their lives better. Which is a good thing. We need more power, not less.

by layer8 6 hours ago

Almost all power consumed ends up as thermal energy in the environment.

by Schiendelman 6 hours ago

That would be fine if we weren't preventing it from radiating out with greenhouse gases.

by g-b-r 8 hours ago

I guess that global warming is a good thing, then

by ben_w 8 hours ago

Only if they build the reactors then go bankrupt, leaving the reactors around for everyone else. Likewise if they build renewables to power the data centres.

by layer8 6 hours ago

Almost all energy consumed for computation turns into heat. Increasing energy consumption therefore doesn’t help the climate, in particular if you don’t source it from incoming heat from the sun (photovoltaics or wind energy).

by bambax 7 hours ago

> Altman expects that his technology will fix the climate, help humankind establish space colonies, and discover all of physics. He predicts that we may have an all-powerful superintelligence “in a few thousand days.”

It seems fair to say Altman has completed his Musk transformation. Some might argue it's inevitable. And indeed Bill Gates' books in the 90s made a lot of wild promises. But nothing that egregious.

by yndoendo 7 hours ago

Both of them remind me of Elizabeth Holmes. She ran Theranos through a promise of lies long enough they turned into fraud.

So far Musk has been pushing the lies out continually to try and prevent any possible exposure to fraud. Like "Getting to Mars will save humanity" or the latest "We will never reach Mars unless Trump is president again". Then again, self-driving cars are just around the corner, as stated in 2014 with a fraudulently staged video of their technology; they just need to work the bugs out.

Altman is making wild claims too, with how Machine Learning will slow and reverse climate change, while proving that the technology needs vastly more resources, especially in power consumption, just to be market viable for business and personal usage.

All three play off people's emotions to repress critical thinking. They are no different from the lying preachers ("I can heal you with a touch of my hand") who use religion to gain power and wealth. The three above are just replacing religion with technology.

by n2d4 5 hours ago

The difference between those is that Musk and Altman make wrong predictions about the future; Holmes with Theranos made wrong statements about the present.

One of them is illegal, the other isn't.

by whamlastxmas 6 hours ago

This is some really out there thinking, and I think you need to do some basic fact checking bc some of this just isn’t true

by theptip 7 hours ago

The better reason to stop taking Altman at his word is on the subject of OpenAI building AGI “for the benefit of humanity”.

Now that he’s restructuring the company to be a normal for-profit corp, with a handsome equity award for him, we should assume the normal monopoly-grabbing that we see from the other tech giants.

If the dividend is simply going to the shareholder (and Altman personally) we should be much more skeptical about baking these APIs into the fabric of our society.

The article is asinine; of course a tech CEO is going to paint a picture of the BHAG, the outcome that we get if we hit a home run. That is their job, and the structure of a growth company, to swing for giant wins. Pay attention to what happens if they hit. A miss is boring; some VCs lose some money and nothing much changes.

by KaoruAoiShiho 8 hours ago

Nobody is taking Sam Altman at his word lol. These ideas about intelligence have been believed for a long time in the tech world, and the guy is just the best at monetizing them. People are pursuing this path because of a general conviction in the ideas themselves; I guess for people like Atlantic writers Sam Altman is the first time they've encountered them, but it really has nothing to do with Sam Altman.

by danielmarkbruce 4 hours ago

100% this. He isn't even saying anything novel (I mean that in a good way).

On top of that, the advance in language models and physical-simulation models (protein structure prediction and weather forecasting, for example) has been so rapid and unexpected that even folks who were previously very skeptical of "AI" are believers - it ain't because Sam Altman is up there talking a lot. I went from AI skeptic to zealot in about 18 months, and I'm in good company.

by lesuorac 4 hours ago

People are taking Altman at his word.

He was literally invited to Congress to speak about AI safety. Sure, perhaps people who have a longer memory of the tech world don't trust him; that's actually not a lot of people. A lot of people just aren't following tech (like my in-laws).

by Capricorn2481 6 hours ago

> Nobody is taking Sam Altman at his word lol

ITT: People taking Sam at his word.

by KaiserPro 4 hours ago

Someone is buying enough of his bullshit to invest a few billion into openAI.

The problem is, when it pops, which it will, it'll fuck the economy.

by twodave 5 hours ago

The number of comparisons I see between some theoretical AGI and science-fiction creations like Jane from the Ender saga or the talking head from That Hideous Strength is, I guess, not surprising. But in both of those cases the only way to make the plot work was to make the AI literally an otherworldly being.

I am personally not sold on AGI being possible. We might be able to make some poor imitation of it, and maybe an LLM is the closest we get, but to me it smacks of “man attempts to create life in order to spite his creator.” I think the result of those kinds of efforts will end more like That Hideous Strength (in disaster).

by melenaboija 8 hours ago

It is weird that one of the most highly valued markets (OpenAI, Microsoft's investments, Nvidia GPUs, ...) is based on a stack that is available to anyone who can pay for the resources to train the models, and that in my opinion has yet to deliver on the expectations that have been created around it.

Not saying it is a bubble but something seems imbalanced here.

by jasode 8 hours ago

>one of the most valued markets ... is based on a stack that is available to anyone

The sophisticated investors are not betting on future increasing valuations based on current LLMs or the next incremental iterations of them. That's a "static" perspective based on what outsiders currently see as a specific product or tech stack.

Instead, you have to believe in a "dynamic" landscape where OpenAI the organization of employees can build future groundbreaking models that are not LLMs but other AI architectures and products entirely. The so-called "moat" in this thinking would be the "OpenAI team to keep inventing new ideas beyond LLM". The moat is not the LLM itself.

Yes, if everyone focuses on LLMs, it does look like Meta's free Llama models will render OpenAI worthless. (E.g. the famous memo: https://www.google.com/search?q=We+have+no+Moat%2C+and+Neith...)

As an analogy, imagine that in the 1980s Microsoft's IPO and valuation looked irrational, since "writing programming code on the Intel x86 stack" was not a big secret. That stock analysis would then logically continue: "Anybody can write x86 software, such as Lotus, Borland, etc." But the lesson learned was that the moat was never the "Intel x86 stack"; the moat was really the whole Microsoft team.

That said, if OpenAI doesn't have any future amazing ideas, their valuation will crash.

by silvestrov 7 hours ago

I'd say that Microsoft's moat was copyright law and the ability to bully the hardware companies with exclusive distribution contracts.

Writing a new DOS (or Windows 3) from scratch is something a lot of developers could do.

They just couldn't do it legally.

And thus it was easy to bully Compaq and others into only distributing PCs with DOS/Windows installed. For some time you even had to pay the Microsoft fee when you wanted a PC with Linux installed.

by Schiendelman 6 hours ago

If this was their most important moat for you, what is their moat now that it is gone?

by immibis an hour ago

They don't have one, which is why they're struggling to maintain market share. They run solely on brand loyalty, and slightly on competing with AWS and GCP to sell server hosting to people with more money than sense.

by melenaboija 7 hours ago

I agree with most of what you said. The main problem for me is that I don't see LLMs as being as solid a foundation for building a company as the technological progress of the '80s was.

I'm 42 though, and already feeling too old to understand the future, lol

by throwaway42668 8 hours ago

It's okay to say it. It's a bubble.

It was just the next in line to be inflated after crypto.

by superluserdo 8 hours ago

I wouldn't write it off as a bubble, since that usually implies little to no underlying worth. Even if no future technical progress is made, it has still taken a permanent and growing chunk of the use case for conventional web search, which is an $X00bn business.

by thegeomaster 8 hours ago

A bubble doesn't necessarily imply no underlying worth. The dot-com bubble hit legendary proportions, and the same underlying technology (the Internet) now underpins the whole civilization. There is clearly something there, but a bubble has inflated the expectations beyond reason, and the deflation will not be kind on any player still left playing (in the sense of AI winter), not even the actually-valuable companies that found profitable niches.

by KaiserPro 4 hours ago

I mean, OpenAI is a bubble, and if it pops, it's big enough to take the rest of tech with it.

by throwaway42668 4 hours ago

Alternatively, it might make capital available to other things.

It's at least theoretically possible that all the liquidity and leverage at the top of the market could tire itself of chasing the next tulip mania.

For instance, $6 billion could have gone into climate tech instead of ElizaX.

My problem with these dumb hype cycles is all the other stuff that gets starved in their wake.

by rpgbr 5 hours ago

I'll never understand how so many smart people haven't realized that the biggest "hallucination" produced by AI was Sam Altman himself.

by paradox460 5 hours ago

He'd been grifting long before the current AI hype.

He went from a failed startup to president of YC to ultra-wealthy investor in the span of about a decade. That's sus.

by bhouston 7 hours ago

Sam Altman has to be a promoter and true believer. It is his job to do that, and he does have new tech that didn't exist before, and it is game-changing.

The issue is more that the company is hemorrhaging talent, and doesn’t have a competitive moat.

But luckily this doesn't affect most of us; rather, it will only possibly harm his investors if it doesn't work out.

If he continues to have access to resources and can hire well and the core tech can progress to new heights, he will likely be okay.

by AndrewKemendo 7 hours ago

OpenAI can't be working on AGI, because they have no arc toward production robotics controllers.

AGI cannot exist in a box that you can control. We figured that out 20 years ago.

Could they start that? Sure, theoretically. However, they would have to massively pivot, and nobody at OAI is a robotics expert.

by mark_l_watson 5 hours ago

I appreciate everything that OpenAI has done, the science of modeling and the expertise in productization.

But, but, but… their drama, or Altman's drama, is now too much for me personally.

With a lot of reluctance I just stopped doing the $20/month subscription. The advanced voice mode is lots of fun to demo to people, and o1 models are cool, but I am fine just using multiple models for chat on Abacus.AI and Meta, an excellent service, and paid for APIs from Google, Mistral, Groq, and OpenAI (and of course local models).

I hope I don't sound petty, but I just wanted to reduce their paid subscriber numbers by one.

by yumraj 5 hours ago

OpenAI's AGI is like Tesla's fully automated self-driving.

So close, yet so far. And, both help the respective CEOs in hyping the respective companies.

by wnevets 4 hours ago

He just needs a paltry trillion dollars to make this AI thing happen. Stop being so short-sighted.

by throwintothesea 7 hours ago

The Gang Learns That Normal People Don't Take Sam Altman Seriously

by rsynnott 7 hours ago

I mean, in general, if you’re taking CEOs at their word, and particularly CEOs of tech companies at their word, you’re gonna have a bad time. Tech companies, and their CEOs, predict all manner of grandiose nonsense all the time. Very little of it comes to pass, but through the miracle of cognitive biases some people do end up filtering out the stuff that doesn’t happen and declaring them visionary.

by hnadhdthrow123 8 hours ago

Will human ego, greed, and selfishness lead to our destruction? (AI or not)

https://news.ycombinator.com/item?id=35364833

by m2024 8 hours ago

Hopefully with as little collateral damage as possible to the remaining life on this planet.

by _davide_ 2 hours ago

Did I ever? :')

by flenserboy 8 hours ago

that ship sailed a long time ago

by neuroelectron an hour ago

I'm surprised nobody has noticed the elephant in the room: ChatGPT has a very hard woke slant. That said, o1 has gotten a lot better, but it's not as uncensored and unbiased as GPT-3 was when it was first released. For a while GPT-4 was very clearly biased toward the left, and U.S. Democrats in particular.

Now keep in mind that this is going to be the default option for a lot of forums and social media for automated moderation. Reddit is already using it a lot and now a lot of the front page is clearly feedback farming for OpenAI. What I'm getting at is we're moving towards a future where only a certain type of dialog will be allowed on most social media and Sam Altman and his sponsors get to decide what that looks like.

by latexr 8 hours ago

To adapt the old Chinese proverb: "The best time to stop taking Sam Altman at his word was the first time he opened his mouth. The second-best time is now." We've known he's a scammer for a long time.

https://www.technologyreview.com/2022/04/06/1048981/worldcoi...

https://www.buzzfeednews.com/article/richardnieva/worldcoin-...

by stonethrowaway 8 hours ago

I would love to read a solid exposé of Worldcoin, but I don't think I will get that from Buzzfeed or from The Atlantic. Both seem to be agenda-driven and hotheaded. I'd like a more impartial breakdown, à la AP-style news reporting.

by latexr 6 hours ago

> I don’t think I will get that from Buzzfeed

BuzzFeed News is not BuzzFeed. They were a serious news website¹ staffed by multiple investigative journalists, including a Pulitzer winner heading that division. They received plenty of awards and recognition.² It is indeed a shame they shared a name with BuzzFeed and that no doubt didn’t help, but it does not detract from their work.

> or from The Atlantic.

There was no Atlantic link. The other source was MIT Technology Review.

> I’d like a more impartial breakdown ala AP-style news reporting.

The Associated Press did report on it³, and the focus was on the privacy implications too. The other time they reported on it⁴ was to announce Spain banned it for privacy concerns.

¹ https://en.wikipedia.org/wiki/BuzzFeed_News

² https://en.wikipedia.org/wiki/BuzzFeed_News#Awards_and_recog...

³ https://apnews.com/article/worldcoin-cryptocurrency-sam-altm...

⁴ https://apnews.com/article/worldcoin-spain-eyeballs-privacy-...

by stonethrowaway 4 hours ago

Thank you. I’ll take a look. The Atlantic link I’m referring to is the actual HN article link of the OP, not of the GP.

by rmltn 7 hours ago

One of the above links is from the MIT Technology Review ...

by 0x1ceb00da 7 hours ago

by latexr 5 hours ago

Technically true, but also doesn’t advance the conversation in any way. Eventually no one will exist or care about anything, but until then everyone will have to live with the decisions and apathy of those who came before.

https://www.newyorker.com/cartoon/a16995

by cowmix 6 hours ago

I keep thinking about Sam Altman’s March ’23 interview on Lex Fridman’s podcast—this was after GPT-4’s release and before he was ousted as CEO. Two things he said really stuck with me:

First, he mentioned wishing he was more into AI. While I appreciate the honesty, it was pretty off-putting. Here’s the CEO of a company building arguably the most consequential technology of our time, and he’s expressing apathy? That bugs me. Sure, having a dispassionate leader might have its advantages, but overall, his lack of enthusiasm left a bad taste in my mouth. Why IS he the CEO then?

Second, he talked about going on a “world tour” to meet ChatGPT users and get their feedback. He actually mentioned meeting them in pubs, etc. That just sounded like complete BS. It felt like politician-level insincerity—I highly doubt he’s spoken with any end-users in a meaningful way.

And one more thing: Altman being a well-known ‘prepper’ doesn’t sit well with me. No offense to preppers, but it gives me the impression he’s not entirely invested in civilization’s long-term prospects. Fine for a private citizen, but not exactly reassuring for the guy leading an organization that could accelerate its collapse.

by Schiendelman 6 hours ago

Hi there!

I've done a huge amount of political organizing in my life, for the common good - influencing governments to build tens of billions of dollars' worth of electric rail infrastructure.

I'm also a big prepper. It's important to understand that stigmatizing prepping is very dangerous - specifically to those who reject it.

Whether it's a gas main break, a forest fire, an earthquake, or a sci-fi story, encouraging people to become resilient to disaster is incredibly beneficial for society as a whole, and very necessary for individuals. The vast, vast majority of people who do it are benefiting their entire community by doing so. Even, as much as I'm sure I'd dislike him if I met him, Sam Altman. Him being a prepper is good for us, at least indirectly, and possibly directly.

Just look at the stories in NC right now - people who were ready to clear their own roads, people taking in others because they have months of food.

Be careful not to ascribe values to behaviors like you're doing.

by cowmix 5 hours ago

I agree that prepping itself is completely fine, and people should be prepared for natural disasters, civil unrest, or whatever scenarios they’re most comfortable with. Building resilience is beneficial for both individuals and communities, and I can see how it plays an important role, especially in situations like the ones you mentioned in NC.

My issue, though, is with someone like Sam Altman—a leader of an organization that could potentially accelerate the downfall of civilization—being so deeply invested in prepping. Altman isn’t just a regular guy preparing for emergencies; he’s an incredibly wealthy individual who has openly discussed stockpiling machine guns and setting up private land he can retreat to at a moment’s notice. It’s that level of preparation, combined with his position at the helm of one of the most consequential tech companies, that doesn’t sit well with me. It feels like he’s hedging against the very future his company might be shaping.

by s1artibartfast 5 hours ago

>It feels like he’s hedging against the very future his company might be shaping.

I don't think the prepping can really be taken as evidence of anything nefarious. Prepping simply means someone thinks there is a risk worth hedging against, even if they are strongly opposed to that outcome.

I think you see many of the rich prepping because they can, but it says little about their desire for catastrophic events.

Prepping for a hurricane doesn't mean you want it to destroy your neighborhood.

by cowmix 5 hours ago

I don’t think ultra-wealthy preppers want catastrophic events to happen, and I agree that prepping in itself isn’t nefarious. My concern is more about the mindset it can encourage. When someone like Altman—who has significant influence over the future of technology—starts focusing on a solid “Plan B,” it might lead them to take more risks, consciously or unconsciously. Having what they believe is a safe fallback could make them more comfortable pushing boundaries in ways that could accelerate instability. It’s not about wanting disaster, but rather how preparing for one might subtly shift decision-making. For instance, “AI safety team.. who needs it? amirite?!”

by AmigoCharlie an hour ago

Your concern is very clear and very appropriate. Ironically, I feel that those who seem not to understand what you're implying, even if they are open to prepping, wouldn't fare that well in an apocalypse: the basic requirement for survival, more than any prepping, is and will remain wisdom.

by Schiendelman 5 hours ago

Every tech CEO (and honestly almost every wealthy person) is doing the same thing. You're mistaking correlation for causation.

by GolfPopper 5 hours ago

There's a difference between resilience and being prepared for the unexpected on one hand (a go-bag (for sudden travel, not Mad Max), on- and off-site backups of data & physical documents, a couple weeks of food & water, an emergency expenses account separate from savings, plus physical currency) and, on the other hand, being under the delusion that hiding in a super-stocked bunker is any sort of acceptable answer to the possible collapse of civilization.

And Altman is definitely in the latter camp with, "But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to."[1]

That a guy who says the above, and also says that AI may be an existential threat to humanity, also runs the world's most prominent AI company is disturbing.

1. https://futurism.com/the-byte/openai-ceo-survivalist-prepper

by s1artibartfast 5 hours ago

Wouldn't you prefer someone fearful of the apocalypse and who takes it seriously running such a company over someone who is ambivalent or doesn't consider it a risk?

by GolfPopper 5 hours ago

He doesn't seem fearful of the apocalypse. He seems to consider preparing for it to be a fun hobby.

by s1artibartfast 4 hours ago

That's not my take. I think prepping is a pretty strong indication of concern. That's not to say you can't enjoy preparing in addition to being concerned.

by whamlastxmas 5 hours ago

I had a prepper phase too, and I'm as leftist and pro-human as they come. It's just fun to organize and plan, and has nothing to do with Alex Jones selling me on the frogs turning gay. And if there's ever a natural disaster (like when I just lost water for a week due to a hurricane), it's nice to have things like 55 gallons of fresh water on standby.

by est 8 hours ago

sama is the best match for today's LLMs because of the "scaling law" Zuckerberg described. Everyone is burning cash to race to the end, but the billion-dollar question is: what is the end for transformer-based LLMs? Is there an end at all?

by vbezhenar 8 hours ago

The end is super-human reasoning along with super-human intuition, based on humanity's knowledge.

by plaidfuji 7 hours ago

Sure, and the end of biotech is perfect control of the human body from pre-birth until death, but innovation has many bottlenecks. I would be very surprised if the bottlenecks to LLM performance are compute and model architecture. My guess is it’s the data.

by meroes 7 hours ago

And some Mongols, Romans, etc. probably said that continually expanding their borders would carry them to comparable heights.

by Mistletoe 7 hours ago

Are you sure you can get to that with transformers?

https://www.lesswrong.com/posts/SkcM4hwgH3AP6iqjs/can-you-ge...

by euphetar 3 hours ago

While I do not have much sympathy for Altman, the article is very low quality and contains zero analysis.

Yeah, maybe on the surface chatbots turned out to be chatbots. But you have to be a poor journalist to stop your investigation of the issue at that and conclude AI is no big deal. Nuance, anyone?

by wicndhjfdn 8 hours ago

Our economy runs on market makers: AI, blockchain; whether they are what they seem in the long run is beside the point. Their sole purpose is to generate economic activity. Nobody really cares if they pan out.

by xyst 6 hours ago

Sam Altman is the modern version of a snake oil salesman

by wg0 6 hours ago

At the risk of irking many, I'd like to add Musk to the list, if someone hasn't already.

From robotics, neurology, and transport to everything in between: not a word should be taken as is.

by swiftcoder 3 hours ago

I feel like the time to stop taking Sam Altman at his word was probably when he was shilling for an eyeball-scanning cryptocurrency...

But apparently as a society we like handing multi-billion dollar investments to folks with a proven track record of (not actually shipping) complete bullshit.

by ein0p 3 hours ago

Anyone who takes startup CEOs at their word has never worked in a startup. The plasticity of their ethics is legendary when there's a prospect of increasing revenue.

by jacknews 7 hours ago

Does anyone take him at face value anyway?

The other issue is that AI's 'boundless prosperity' is a little like those proposals to bring an asteroid made of gold back to earth. 20m tons, worth $XX trillion at current prices, etc. The point is, the gold price would plummet, at the same time as the asteroid, or well before, and the promised gains would not materialize.

If AI could do everything, we would no longer be able (due to no-one having a job), let alone willing, to pay current prices for the work it would do, and so again, the promised financial gains would not materialize.

Of course in both cases, there could be actual societal benefits - abundant gold, and abundant AI, but they don't translate directly to 'prosperity' IMHO.

by thwg 8 hours ago

TSMC stopped way earlier.

by luxuryballs 4 hours ago

I wonder how many governments and A-listers are investing heavily in the rapid development of commodity AI video that is indistinguishable from real video. Does it seem paranoid?

It would at least make them more believable when they blast out claims that a certain video must be fake, especially given how absurd and shocking it is.

by throwaway918299 6 hours ago

I'm just expecting a Microsoft acquisition, after which Altman exits and moves on to his next grift.

by breck 5 hours ago

In 2017 Daniel Gross, a YC partner at the time, recruited me to join the YC software team. Sam Altman was president of YC at this time.

During my interview with Jared Friedman, their CTO, I asked him what Sam was trying to create: the greatest investment firm of all time, surpassing Berkshire Hathaway, or the greatest tech company, surpassing Google? Without hesitation, Jared said Google. Sam wanted to surpass Google. (He did it with his other company, OpenAI, and not YC, but he did it nonetheless.)

This morning I tried Googling something and the results sucked compared to what ChatGPT gave me.

Google still creates a ton of value (YouTube, Gmail, etc), but he has surpassed Google in terms of cutting edge tech.

by tightbookkeeper 7 hours ago

Journalists smell blood in the water. When times were looking better they gave him uncritical praise.

by klabb3 8 hours ago

> Altman expects that his technology will fix the climate, help humankind establish space colonies, and discover all of physics.

Yes. We've been through this again and again. Technology does not follow potential. It follows incentive. (Also, “all of physics”? Wtf is he smoking?)

> It’s much more pleasant fantasizing about a benevolent future AI, one that fixes the problems wrought by climate change, than dwelling upon the phenomenal energy and water consumption of actually existing AI today.

I mean, everything good in life uses energy; that's not AI's fault per se. However, we should absolutely evaluate tech anchored in the present, not the future, especially with something whose emergent properties we understand as poorly as AI's. Even when there's an expectation of rapid change, the present is a much better proxy than yet another sociopath with a god complex whose job is to be a hype man. Everyone's predictions are garbage. At least the present is real.

by photochemsyn 8 hours ago

Of course no corporate executive can be taken at their word, unless that word is connected to a legally binding contract, and even then, the executive may try to break the terms of the contract, and may have political leverage over the court system which would bias the result of any effort to bring them to account.

This is not unusual - politicians cannot be taken at their word, government bureaucrats cannot be taken at their word, and corporate media propagandists cannot be taken at their word.

The fact that the vast majority of human beings will fabricate, dissemble, lie, scheme, manipulate etc. if they see a real personal advantage from doing so is the entire reason the whole field of legally binding contract law was developed.

by fnordpiglet 8 hours ago

While I agree that anyone taking Sam Altman at his word is and always was a fool, this opinion piece by a journalism major at a journalism school, giving his jaded view of technology, is the tired trope obsessed with the fact that reality in the present is always reality in the present. The fact that I drive a car that's largely - if not entirely - autonomous in highly complex situations and fueled by electricity alone, using a supercomputer to play music from an almost complete back catalog of everything ever released at my voice's command, on my way to my final cancer treatment for a cancer that ten years ago was almost always fatal, while above me constellations of satellites cooperate via lasers to provide global high-speed wireless internet deployed by dozens upon dozens of private rocket launches as we prepare the final stretch towards interplanetary spaceships, over which computers can converse in true natural language with clear human voices and natural intonation… Well. Sorry, I don't have to listen to Sam Altman to see we live in a magical era of science fiction.

The most laughable part of the article is where they treat the fact that in the past TWO YEARS we haven't gone from "OMG, we've achieved near-perfect NLP" to "Deep Thought, tell us the answer to life, the universe, and everything" as some sort of huge failure; that framing is patently absurd. If you took Altman at his word on that one, you probably also scanned your eyeball for fake money. The truth, though, is that the rate of change in the products his company is making is still breathtaking: the text-to-speech tech in the latest advanced voice release (recognizing it's not actually text-to-speech but something profoundly cooler, though that's lost on journalism majors teaching journalism majors like the author) puts to shame the last 30 years of TTS. This alone would have been enough to build a fairly significant enterprise selling IVR and other software.

When did we go from being enthralled by the rate of progress to being bored that it's not fast enough? That what we dream and what we achieve aren't always 1:1, but that's still amazing? I get that when we put down the devices and switch off the noise we are still bags of mostly water, our backs hurt, we aren't as popular as we wish we were, our hair is receding, maybe we need Invisalign but flossing that tooth every day is easier and cheaper, and all the other shit that makes life much less glamorous than they sold us in the dot com boom, or nanotech, etc., as they call out in the article.

But the dot com boom did succeed. When I started at early Netscape no one used the internet. We spun the stories of the future this article bemoans to our advantage. And it was messier than the stories in the end. But now -everyone- uses the internet for everything. Nanotechnology permeates industry, science, tech, and our every day life. But the thing about amazing tech that sounds so dazzling when it’s new is -it blends into the background- if it truly is that amazingly useful. That’s not a problem with the vision of the future. It’s the fact that the present will never stop being the present and will never feel like some illusory gauzy vision you thought it might be. But you still use dot coms (this journalism major assessment of tech was published on a dot com and we are responding on a dot com) and still live in a world powered by nanotechnology, and AI promised in TWO YEARS is still mind boggling to anyone who is thinking clearly about what the goal posts for NLP and AI were five years ago.

by nottorp 8 hours ago

The time to stop taking him seriously was when he started his "fear AI, give me the monopoly" campaign.

by whamlastxmas 5 hours ago

It's really sad to see all the personal attacks and cynicism that have no basis in reality. OpenAI has an amazing product and was first to market with something game-changing for billions of people. Calling him a fraud and a scammer is super ridiculous.

by xbar 4 hours ago

I do not know about any of that or your feelings of sadness.

I have skepticism of his predictions, and disregard for his exaggerations.

I have a ChatGPT subscription and build features on OpenAI technology.

by meroes 5 hours ago

Game changing for billions? Please give an example. I’m fine to be proven wrong but I don’t know what you could mean.

by EchoReflection 3 hours ago

According to the book "The Sociopath Next Door", approximately 1 in 25 Americans is a "sociopath" who "does not feel shame, guilt, or remorse, and can do anything at all without what 'normal' people think of as an internal voice labeling things as 'right' or 'wrong'". It makes sense to me that sociopaths would be over-represented among C-level executives and "high performers" in every field.

https://www.betterworldbooks.com/product/detail/the-sociopat...

by slenk 5 hours ago

The same needs to happen to Elon Musk.

by m3kw9 7 hours ago

You all should just sit back and not pick at every word he says; just sit calmly and let him cook. And he's been cooking.

by richrichie 7 hours ago

He seems like any other tech “evangelical” to me.

by dmitrygr 6 hours ago

Someone took that grifter at his word ever? Haha! Wait you’re serious? Let me laugh even harder. Hahahaha

by GolfPopper 6 hours ago

Apparently a whole lot of people think the guy who's running a <checks notes> for-profit eyeball-scanning cryptocurrency "for the benefit of humanity" is very serious when he says his non-profit (but also for-profit) AI company is also "for the benefit of humanity".

by krick 5 hours ago

Since it's paywalled, I assume we are discussing the title (as usual). It implies that there was a time when [we] took him at his word. Uh, ok, maybe. But what does it matter? "We" aren't the people at VCs who fund it, I suppose? So, what does it matter if "we" take him at his word? Hell, even if it suddenly went public, it still wouldn't mean much whether we trust the guy or not, because we could buy shares for the same reason we buy crypto or TSLA shares.

As a matter of fact, I suspect the author of the article actually belongs to the gullible minority who ever took Altman at his word, and is now telling everyone what they already knew. But so what? What are we even discussing? Nobody is calling for people to delete their OpenAI (or, in fact, Anthropic, or whatever) accounts as long as we find them useful for something, I suppose. It just makes no difference at all whether that writer or his readers take Altman at his word; their opinions have no real effect on the situation, it seems. They are merely observers.

by 7e 4 hours ago

I mean, there is a reason the board tried their best to exorcise Sam Altman from the company. OpenAI could be the next Loopt.

by nomilk 8 hours ago

tl;dr author complains that Sam's predictions of the future of AI are inflated (but doesn't offer any of his own), and complains that AI tools that surprised us last year look mundane now.

The article is written to appeal to people who want to feel clever casually slagging off and dismissing tech.

> it appears to have plateaued. GPT-4 now looks less like the precursor to a superintelligence and more like … well, any other chatbot.

What a pathetic observation. Does the author not recall how bad chatbots were pre-LLMs?

What LLMs can do blows my mind daily. There might be some insufferable hype atm, but geez, the math and engineering behind LLMs is incredible, and it's not done yet - they're still improving from more compute alone, not even factoring in architecture discoveries and innovations!

by raincole 7 hours ago

> it appears to have plateaued. GPT-4 now looks less like the precursor to a superintelligence and more like … well, any other chatbot.

This is such a ridiculous sentence.

GPT-4 now looks like any other chatbot because the technology advanced, so the other chatbots are smarter now as well. Somehow the author is trying to twist this into a bad thing.

by asadotzler 6 hours ago

If everyone can catch up so easily, OAI has no moat, and SamA is all the more full of shit for asserting that they do.

by cyanydeez 9 hours ago

Better headline: It's too late to stop taking Sam Altman at his word

See same with Elon Musk.

Money turns geniuses into smooth-brained egomaniacal idiots. See the same with Steve Jobs.

by surgical_fire 8 hours ago

They were never geniuses. They were just rich assholes propped up by other rich assholes.

"It's too late to stop conflating wealth with intelligence"

by golergka 8 hours ago

Regardless of personal qualities, for some reason these people have achieved great things for themselves and humanity, whereas countless competitors, including many other rich assholes, have not.

by lor_louis 8 hours ago

Luck and money.

by golergka 5 hours ago

A lot of people have money, and luck can only explain a single case.

by surgical_fire 7 hours ago

> achieved great things for themselves and humanity

For themselves? Absolutely.

For humanity? Perhaps we have wildly different ideas of what is good for humanity.

by dimgl 8 hours ago

I'm not sure this is the same situation... SpaceX just began a mission to rescue stranded astronauts in space. And Starlink has legitimate uses.

by OKRainbowKid 7 hours ago

That doesn't make Musk any less of an egomaniacal idiot in my eyes.

by jijijijij 7 hours ago

How could they not? The word "wealth" or the idea of "money" is completely misleading here. It's a cancerous accumulation of resources and influence. They are completely detached from consequential reality. The human brain has not evolved to thrive under conditions of total, unconditional material abundance. People struggle to moderate sugar intake; imagine unlimited access to everything. And it's an inherently amoral existence, leading to the necessity of unhinged internal models of the world to justify continuation and reward. Their sense of self-efficacy derailed in zero-g. Listen to them talk about fiction... They literally can't tell you the price of a banana; how can they possibly get any meaningful story told? All that is left is the aesthetics and mechanical exterior of narration. How can there be love or friendship without normal people grounding you? You could make everyone you ever met during your lifetime a millionaire while effectively changing nothing for yourself. Nobody can be this rich and not lose touch with common shared reality.

Billionaires are shameful for the collective; they should be shameful to every one of us. They are fundamentally the most unfit for leadership. They are evidence of civilizational failure, and the least we can do is not idolize them.

by api 8 hours ago

Money removes social feedback. You end up surrounded by bobbleheads telling you what a genius you are… because they want your money. This is terrible for human psychology. It's almost like a kind of solitary confinement: solitary in the sense that you are utterly deprived of meaningful, rich human contact.

by aomix 8 hours ago

I think about this comparison sometimes https://x.com/Merman_Melville/status/1088527693757349888?lan...

"Being a billionaire must be insane. You can buy new teeth, new skin. All your chairs cost 20,000 dollars and weigh 2,000 pounds. Your life is just a series of your own preferences. In terms of cognitive impairment it's probably like being kicked in the head by a horse every day"

Solitary confinement is a great comparison. But also, not existing in the same reality as 99.99% of the population must really warp you too.

by yownie 8 hours ago

It's this, more than anything else, that I wish the general public would understand: those same bobbleheads surround celebrities and eventually warp all sense of what common everyday items cost. We often only see the end result of this when so-and-so celebrity declares bankruptcy and the masses cheer.

In reality they've been sucked dry, vampire-like, by close family, friends, and salesmen for years and didn't know it.

by throwaway42668 8 hours ago

Sam Altman is not a hapless victim at the mercy of the isolating effects of his financial success.

He was an opportunistic, amoral sociopath before he was rich, and the system he reaps advantage from strongly selects for hucksters of that particular ilk more than anything else.

He's just another Kalanick, Neumann, Holmes or Bankman-Fried.

by imjonse 8 hours ago

Are you seriously suggesting Altman is a genius? Or Musk for that matter?

by throwgfgfd25 7 hours ago

Jobs was a sort of cracked genius and a very imperfect human who wanted to be a better human. Money didn't make him worse, or better. It didn't really change him at all on a personal level. It didn't even make him more confident, because he was always that. Look back through anecdotes about him in his life and he's just the same guy, all the time.

Even the stories I heard about him from one of his indirect reports back in the pre-iCEO "Apple is still fucked, NeXT is a distracted mess" era were just like stories told about him from the dawn of Apple and in the iPhone era.

Musk and Altman are opportunists. Musk appears to be a malignant narcissist. Neither seems in a rush to be a better human.

by DemocracyFTW2 9 hours ago

> Last week, CEO Sam Altman published an online manifesto titled “The Intelligence Age.” In it, he declares that the AI revolution is on the verge of unleashing boundless prosperity and radically improving human life.

/s

by kopirgan 7 hours ago

I generally don't take anyone other than Leon at his word. /s

by bediger4000 9 hours ago

This seems like a more general problem with journalistic practices. Journalists don't want to inject their own judgements into articles, which is admirable, and makes sense. So they quote people exactly. Quoting exactly means that bad actors can inject falsehoods into articles.

I don't have any suggestions on how to solve this. Everything I can think of has immediate large flaws.

by MailleQuiMaille 9 hours ago

>Journalists don't want to inject their own judgements into articles, which is admirable, and makes sense.

Is it even possible? Like, don't you know the political inclination of any website/journal you read? I feel like this search for "The Objective Truth" is just a chimera. I'd rather articles combine the pros and cons of everything they discuss, tbh.

by Moto7451 8 hours ago

There's a difference between having natural human biases that you try to avoid when reporting (by using the usual format: a context sentence for where, when, and to whom something was stated, the quote, and an appositive denoting the speaker) and writing "this guy is full of crap" or "you really need to believe this person" while cherry-picking statements.

You can easily find examples of each. Both the NYT and Slate are considered left-leaning, and at the same time they have been the professional stomping grounds of right-leaning writers who started their own media companies that are not left-leaning. Everyone has a bias, and they don't have to work somewhere with that same bias, especially if you just stick to the paper's style guide. On the same substance, the two media outlets present the same topic very differently. Sometimes I appreciate the Slate format for the author's candor and view being injected (like being pointed about Malcolm Gladwell). Sometimes I just want to know the facts as clearly stated as possible (I don't care if the author doesn't believe in climate change, tell me what happened when North Carolina flooded).

by smogcutter 8 hours ago

Yes, you’ve rediscovered the curriculum of a journalism 101 class.

by Aeolun 8 hours ago

So are you saying there are a lot of journalists who never studied, or did they just never pay attention in class?

Because articles that actually do that are few and far between.

by Apreche 8 hours ago

It’s possible to insert a few sentences factually accounting for a person’s character without inserting a subjective judgement of character.

For example you could say:

Joey JoeJoe, billionaire CEO, who notably said horrible things, was convicted of some crimes, and ate three babies, was quoted as saying “machine learning is just so awesome”.

There, you didn’t inject a judgement. You accurately quoted the subject. You gave the reader enough contextual information about the person so they know how much to trust or not-trust the quote.

by VonGallifrey 5 hours ago

> who notably said horrible things

How do you objectively decide which statements are horrible and which aren't?

The other stuff you listed are facts, but this one would be subjective. That isn't just providing contextual information, but adding personal bias into the reporting.

by cogman10 8 hours ago

This does often happen (depending on the leaning of the newspaper, it's omitted if the figure is someone they support and emphasized otherwise).

A major problem, though, is headlines don't and can't carry this context. And those are the things most people read.

The best you'll get is "Joey JoeJoe says machine learning is just so awesome" or at best "Joey JoeJoe comments on ML. The 3rd word will blow you away!".

by afavour 8 hours ago

Extreme examples are easy. But you can pick and choose which facts to present to the reader to affect the judgement they’re making. It would be trivially easy to paint Bill Gates as either a legendary humanitarian or a ruthless capitalist egotist to someone that’s never heard of him.

by secondcoming 8 hours ago

"Nelson Mandela, convicted of some crimes, calls for World Peace"

by PKop 8 hours ago

Or more likely, the journalists don't know any better and believe the AI hype sold to them and promote it of their own accord.

by Dalewyn 9 hours ago

A journalist's job is to journal something, exactly like how NTFS keeps a journal of what happens.

A journalist doing anything other than journaling is not a journalist.

So people getting quoted verbatim is perfectly fine. If the quoted turns out to be a liar, that's just part of the journal.

by bee_rider 8 hours ago

I don’t think that’s right. First off, we don’t generally define jobs based on the closest computer analogy (we would be unhappy if the loggers returned with a list of things that happened in the woods, rather than a bunch of wood).

The journalist’s job is to describe what actually is happening, and to provide enough context for readers to understand it. Some bias will inevitably creep in, because they can’t possibly describe every event that has ever happened to their subject. But for example if they are interviewing somebody who usually lies, it would be more accurate to at least include a small note about that.

by Dalewyn 8 hours ago

>The journalist’s job is to describe what actually is happening, and to provide enough context for readers to understand it.

The former is a journalist's job; the latter is the reader's concern, not the journalist's.

One of the reasons I consider journalism a cancer upon humanity is that journalists can't just write down "it is 35 degrees Celsius today at 2pm", but rather "you won't believe how hot it is".

Just journal down what the hell happens literally and plainly, we as readers can and should figure out the rest. NTFS doesn't interject opinions and clickbait into its journal, and neither should proper journalists.

by bee_rider 8 hours ago

In the best case, the journalist is making a product for the reader; their job is to help the reader. The second example you mention is typical clickbait journalism, where the journalist has betrayed the reader and is trying to steal their attention, because they actually serve advertisers.

But the first example is not very useful either. That journalist could be replaced by a fully automated thermometer. Or weather stations with an API. Context is useful: “It is 35 degrees Celsius, and we’re predicting that it will stay sunny all day” will help you plan your day. “It is 35 degrees Celsius today, finishing off an unseasonably warm September” could provide a little info about the overall trend in the weather this year.

I don’t see any particular reason that journalists should follow your definition, which you seem to have just… made up?

by Dalewyn 8 hours ago

>your definition, which you seem to have just… made up?

See: https://www.merriam-webster.com/dictionary/journal

Specifically noun, senses 2B through 2F.

I expect journalists to record journals and nothing more, nothing less; not editorials or opinion pieces, which are written by authors or columnists or whatever.

by bee_rider 8 hours ago

Dictionary similarity is not how people get their job descriptions. If you want to just pick a similar word from the dictionary, why are journalists sharing this stuff? Journals are typically private, after all. If someone read your journal, you might be annoyed, right?

Or, from your definition, apparently:

> the part of a rotating shaft, axle, roll, or spindle that turns in a bearing

I don’t think these journalists rotate much at all!

A better definition is one of… journalism.

https://www.britannica.com/topic/journalism

journalism, the collection, preparation, and distribution of news and related commentary and feature materials through such print and electronic media as […]

That said, I don't think an argument from definition is all that good anyway. These definitions are descriptive, not prescriptive. Journalism is a profession; they do what they do for the public good. If you think that it would be better for the field of journalism to produce a contextless log of events, defend that idea in and of itself, rather than leaning on some definition.

by vundercind 7 hours ago

Why favor a definition of “journalist” that approximately nobody else uses? It seems like it would just make it hard to communicate.

by tiznow 8 hours ago

I think you might need a chill pill; I've never met a single journalist or editor who would let "you won't believe how hot it is" pass in more than a tweet.

by Dalewyn 8 hours ago

As a counterexample, I have deep respect for weather forecasters because they are professionally and legally bound to state nothing but the scientific journal at hand.

"Typhoon 14 located 500km south of Tokyo, Japan with a pressure of 960hPa and moving north-northeast at a speed of 30km/h is expected to traverse so-and-so estimated course of travel at 6pm tomorrow."

"Let's go over to Arizona. It's currently 105F in Tuscon, 102F in Yuma, ..."

Brutally to the point, the readers are left to process that information as appropriate.

Journalists do not do this, and they should if they claim to be journalists.

by tiznow 8 hours ago

>they are professionally and legally bound to state nothing but the scientific journal at hand

In America, just about every meteorologist editorializes the weather to a degree. There's nothing scientific about telling me "it's a great night for baseball" (great for the fans? Pitchers? Hitters?) or "don't wash your car just yet", but I will never stop hearing those. I don't, and the public doesn't seem to, think that infringes on journalistic standards, because the information is still presented. Maybe this is different from what you mean -- if you're talking about a situation where journalists intentionally let the added context push the information to the side, obviously that is undesirable.

I will add that weather as a "news product" actually gains quite a fair bit from presenter opinion, and news is a product above all.

by ks2048 8 hours ago

You're describing a microphone / voice recorder, not a journalist.

There are of course places you can go to get raw weather data, but a journalist might put it in context of what else is going on, interview farmers or climatologists about the situation, etc.

There are lots of kinds of journalism, but maybe most important is investigative journalism. They are literally doing an investigation - reading source material, actively seeking out the right people to interview and asking them the right questions, following the leads to more information.

by soared 8 hours ago

That's... not what journalism means. I don't know where you got that definition, but I can't find anything similar. Processing and displaying information is a huge part of journalism, i.e. assessing what is truth or fiction and communicating each as such. Wikipedia: > A journalist is a person who gathers information in the form of text, audio or pictures, processes it into a newsworthy form and disseminates it to the public. This is called journalism.

by zmgsabst 8 hours ago

You’re injecting that “ie” — Wikipedia doesn’t say it as such.

They’re describing collating and you’re describing evaluating.

by Dalewyn 8 hours ago

And to add, evaluating is the responsibility of the reader.

If you're also tasking "journalists" with evaluating for you, you aren't a reader and they aren't journalists. You're just a dumb terminal getting programs (others' opinions) installed, and they are influencers.

by i80and 8 hours ago

That sounds more like stenography than anything else

by llamaimperative 8 hours ago

You’re describing a PR representative. Simply the decision of what to cover is inherently selective and driven by an individual’s and a culture’s priorities.

by bediger4000 8 hours ago

> A journalist's job is to journal something, exactly like how NTFS keeps a journal of what happens.

Your choice of metaphor points out problems with your definition. Avid Linux users will be immediately biased against what you wrote, true though it may be, because you assumed that NTFS is the predominant, or even a good, example of a journaling file system.

by booleandilemma 8 hours ago

Sounds like you would be a bad journalist.

by twelve40 8 hours ago

Well, the good news is all that stuff comes with an expiration date, after which we will know if this is our new destiny or yet another cloud of smoke.

This is a good reminder:

> Prominent AI figures were among the thousands of people who signed an open letter in March 2023 to urge a six-month pause in the development of large language models (LLMs) so that humanity would have time to address the social consequences of the impending revolution

In 2024, ChatGPT is a weird toy, my barber demands paper cash only (no bitcoin or credit cards or any of that phone nonsense, and this is Silicon Valley), I have to stand in line at USPS and the DMV with mindless paper-shuffling human robots, marveling at the humiliating stupidity of manual jobs, and robotaxis are still almost here, just around the corner, as always. Let's check again in a "couple of thousand days", I guess!

by namaria 8 hours ago

I've said this before: at the root of all these technological promises lies a perpetual motion machine. They're all selling the reversal of thermodynamics.

Any system complex enough to be useful has to be embedded in an ever more complex system. The age of mobile phone internet rests on the shoulders of an immense and enormously complex supply chain.

LLMs are capturing low entropy from data online and distilling it for you while producing a shitton of entropy on the backend. All the water and energy dissipated at data centers, all the supply chains involved in building GPUs at the rate we are building. There will be no magical moment when it's gonna yield more low entropy than what we put in on the other side as training data, electricity and clean water.

When companies sell ideas like 'AGI' or 'self driving cars' they are essentially promising you can do away with the complexity surrounding a complex solution. They are promising they can deliver low entropy on a tap without paying for it in increased entropy elsewhere. It's physically impossible.

You want human intelligence to do work, you need to deal with all the complexities of psychology, economics and politics. You want complex machines to do autonomous work, you need an army of people behind it. What AGI promises is, you can replace the army of people with another more complex machine. It's a big bald faced lie. You can't do away with the complexity. Someone will have to handle it.

by ben_w 8 hours ago

> It's physically impossible

Your brain is proof to the contrary. AGI means different things to everyone, but a human brain definitely counts as "general intelligence", and that, implemented in silicon, is enough to get basically all the things promised by AGI: if it's done at the 20 watts per brain that biology manages, then all of humanity can be simulated within the power envelope of the USA electrical grid… three times over.
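
A rough back-of-envelope check of that arithmetic, sketched below; the population and grid figures are assumed round numbers (roughly 8 billion people, roughly 4,200 TWh/year of US generation), not from the article:

    # back-of-envelope check of the "20 W per brain" claim (assumed inputs)
    brains = 8e9                                # assumed world population
    watts_per_brain = 20                        # figure quoted above
    humanity_w = brains * watts_per_brain       # 1.6e11 W, about 160 GW
    us_grid_avg_w = 4.2e15 / 8766               # ~4,200 TWh/yr as average watts, about 480 GW (assumed)
    print(us_grid_avg_w / humanity_w)           # roughly 3, consistent with "three times over"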

by nicomeemes 7 hours ago

You're lazily mixing metaphors here. This is the problem in all such discussions: it often gets reduced to some combination of hand-waving and hype-training. "AGI" means different things to everyone, okay? Then it's a meaningless term. It's like saying, hey, with a quantum computer of enormous size we could simulate molecular interactions at a level impossible with current technology. I would love for us to be able to do that - but where is the evidence it is even possible?

by ben_w 7 hours ago

There's no metaphor here, I meant literally doing those things.

I followed up the point about AGI meaning different things by giving a common and sufficient standard of reference.

Your brain is evidence that it's "even possible".

by namaria 5 hours ago

> Your brain is evidence that it's "even possible".

All your brain proves is that a universe can produce planetary ecosystems capable of supporting human civilizations made of very efficient brain-carrying mammals.

It definitely doesn't prove that these mammals can create boxes capable of 'solving physics, poverty, and global warming' if we just give Sam Altman enough electricity and chips. Or dollars to that effect.

by ben_w 5 hours ago

If that's what you meant, then I agree with you.

What's the quote? "If the human brain were so simple that we could understand it, we would be so simple that we couldn’t".

Even though it doesn't need to be a single human doing all of it, our brains are existence proofs of the physical possibility, not of our own understanding.

by namaria 8 hours ago

> a human brain definitely counts as "general intelligence", and that, implemented in silicon, is enough to get basically all the things promised by AGI: if it's done at the 20 watts per brain that biology manages, then all of humanity can be simulated within the power envelope of the USA electrical grid… three times over.

So far the only thing that has been proven is we can get low entropy from all the low entropy we've published on the internet. Will it get to a point where models can give us more low entropy than what is present in the training data? Categorically: no.

by ben_w 7 hours ago

You are using "entropy" in a way I do not recognise.

Whatever you mean, our brains prove it's possible to have a system that uses 20 watts to demonstrate human-level intelligence.

by namaria 6 hours ago

Whatever you mean, it took 13 billion years and the whole universe, as far as we can tell, to create the 20-watt human brain. And it doesn't take 20 watts of energy to maintain humans; it takes a whole ecosystem capable of sustaining a human population. And for us to operate at our current level of information processing, it takes a whole planetary civilization. So no, you haven't proved anything by saying that a human brain consumes only 20 watts of energy. We spread an awful lot of entropy around that has to be kept away from our delicate biological and social systems.

You're positing a way to create human intelligence-like in a bottle, that's the same as speculating about the shape of a reality where we have FTL travel or teleportation or whatever else you fancy.

If we're talking about what current ML/AI can do, they can extract patterns from training data and then apply those patterns to other inputs. This can give us great automation, but it won't give us anything better than the training data, solve physics, fix global warming or poverty, or give us human intelligence in a chip.

Whatever quantity Q of entropy is in the training data, the total output will be more than Q when all is accounted for. That's true for humans and machines. No possible shape of AGI will give us any output with less entropy than the combination of inputs had. The dream that a machine will solve all the problems that humanity can't hinges on negating that, which goes against thermodynamics.

As it stands, the planet cannot cope with all the entropy we're spreading around. It will eventually collapse civilization/the ecosystem, whatever buckles first, from the excess entropy. Because global warming, poverty, or ignorance is just entropy. Disorder. Things not being just so as we need them to be.

by ben_w 4 hours ago

Unless I'm mistaken, you've added these two paragraphs since my last comment:

> Whatever quantity Q of entropy is in the training data, the total output will be more than Q when all is accounted for. That's true for humans and machines. No possible shape of AGI will give us any output with less entropy than the combination of inputs had. The dream that a machine will solve all the problems that humanity can't hinges on negating that, which goes against thermodynamics.

Thermodynamics applies to closed systems, which the Earth in isolation isn't: it receives a constant stream of low-entropy sunlight and radiates higher-entropy heat back into space.

This is why the source material for Shakespeare didn't already exist on Earth in the Cretaceous.

It's also why we've solved loads of problems we used to have, like bubonic plague and long distance communication.

> As it stands, the planet cannot cope with all the entropy we're spreading around.

We're many orders of magnitude away from entropic limits. Global warming comes from greenhouse gases trapping more of the sun's energy in the atmosphere, not from the direct waste heat of our power plants.

> It will eventually collapse civilization/the ecosystem, whatever buckles first, from the excess entropy. Because global warming, poverty, or ignorance is just entropy. Disorder. Things not being just so as we need them to be.

Entropy isn't everyday disorder, it's a specific relationship of microstates and macrostates, and you can't usefully infer things when you switch uses.
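For reference, the specific relationship being alluded to is Boltzmann's definition, which ties a macrostate's entropy to the number of microstates Ω compatible with it:

    S = k_B * ln(Ω)

where k_B is Boltzmann's constant; "disorder" in the everyday sense doesn't appear anywhere in it.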

Poverty is much reduced compared to the historical condition. So is ignorance: we have to specialise these days because there's too much knowledge for any one human to learn.

Entropic collapse is indeed inevitable, but that inevitability is on the scale of 10^18 years or more, with only engineering challenges rather than novel scientific breakthroughs standing in the way (the latter would plausibly stretch that to 10^106 years if someone can figure out how to use Hawking radiation, but I don't want to divert into why I think that's more than merely an engineering challenge).

by ben_w 5 hours ago

> Whatever you mean, it took 13 billion years and the whole universe as far as we can tell to create the 20 watt human intelligence brain.

It's grossly unreasonable to include the entire history of the universe given that the Earth only formed about 4.5 billion years ago; and given that evolution wasn't even aiming for intelligence, even starting from our common ancestors, the small rodent-like mammals of 65 million years ago, wildly overstates the effort required — even the evolution of primates is too far back without intentional selective breeding.

> You're positing a way to create human-like intelligence in a bottle; that's the same as speculating about the shape of a reality where we have FTL travel or teleportation or whatever else you fancy.

FTL may well be impossible.

If you seriously think human intelligence is impossible, then you also think you don't exist: you, yourself, are a human-like intelligence in a bottle. The bottle being your skull.

> This can give us great automation, but it won't solve physics, global warming, poverty or give us human intelligence in a chip.

AI has already been in widespread use in physics for a while now, well before the current zeitgeist of LLMs.

There's a Y Combinator startup: "Charge Robotics is building robots that automate the most labor-intensive parts of solar construction".

Poverty has many causes, some of which are already being reduced or resolved by existing systems — and that's been the case since one of the ancestor companies of IBM, Tabulating Machine Company, was doing punched cards for the US census.

As for human intelligence on a chip? Well, (1) it's been quite a long time since humans were capable of designing the circuits manually, given the feature size is now at the level where quantum mechanics must be accounted for or you get surprise tunnelling between gates; and (2) one of the things being automated is feature segmentation and labelling of neural micrographs, i.e. literally brain scanning: https://pubmed.ncbi.nlm.nih.gov/32619485/

by namaria 5 hours ago

I never said human intelligence is impossible.

All I am saying is that Sam Altman's promises (remember the original topic) hinge on breaking thermodynamics.

Humans evolved on a planet that was just so. The elements on Earth that make life possible couldn't have been created in a younger universe. So it took the full history of the universe to produce human intelligence. It also doesn't exist in a vacuum. Current human civilization is not a collection of 8 billion 20-watt boxes.

> As for human intelligence on a chip? Well, (1) it's been quite a long time since humans were capable of designing the circuits manually, given the feature size is now at the level where quantum mechanics must be accounted for or you get surprise tunnelling between gates; and (2) one of the things being automated is feature segmentation and labelling of neural micrographs, i.e. literally brain scanning: https://pubmed.ncbi.nlm.nih.gov/32619485/

I don't understand any of this so I won't comment.

by ben_w 5 hours ago

> All I am saying is that Sam Altman's promises (remember the original topic) hinge on breaking thermodynamics

If it hinged on that, then you would actually be saying it's impossible.

> The elements on Earth that make life possible couldn't have been created in a younger universe

Those statements also apply to the silicon and doping agents used in chip manufacture. They tell you nothing of relevance: we're not making Carl Sagan's apple pie from scratch with AI, we're trying to get a thing like us.

by swiftcoder 3 hours ago

> If it hinged on that, then you would actually be saying it's impossible

"impossible with the approach OpenAI is using" != "impossible"

by simonw 8 hours ago

“robotaxis are still almost here, just around the corner, as always”

We have them in San Francisco now (and Los Angeles and Phoenix, and Austin soon.)

by Zigurd 6 hours ago

Waymo is at the threshold of the kind of tipping point speech recognition reached through decades of grinding effort. I worked on speech recognition applications in the '80s, when the best of it was just barely usable for carefully crafted use cases. Now taxis can be automated well enough if you have the right sensor suite and an exquisitely mapped environment. Waymo can do this for about 0.05% of ride-hailing riders, if 28M Uber rides per day and 100k Waymo rides per week are reasonably accurate (rough arithmetic below). This is like when connected speech started working with a good mic in a quiet environment. Now automated multi-speaker transcription with background noise, crosstalk, and whatever mic you've got just works. Back in the day some serious people were convinced speech recognition at that level would be impossible without solving AGI.
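A minimal back-of-the-envelope check of that 0.05% figure, assuming the quoted ride counts are roughly right (both numbers are taken from the estimates above, not independently verified):

    # Rough share of ride-hailing trips served by Waymo, using the quoted figures
    waymo_rides_per_day = 100_000 / 7        # ~14,300 rides/day
    uber_rides_per_day = 28_000_000
    share = waymo_rides_per_day / uber_rides_per_day
    print(f"{share:.2%}")                    # ~0.05%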

by aithrowawaycomm 8 hours ago

By "robotaxi" I think they meant loosely "personal self driving car," not an automated taxi service.

Waymo's overstated[1] success has let self-driving advocates do an especially pernicious bit of goalpost-shifting. I have been a self-driving skeptic since 2010, but if you had told me in 2010 that in 10-15 years we would have robotaxis closely overseen by remote operators who can fill in the gaps, I would have thought that was much more plausible than fully autonomous vehicles. And the human operators are truly critical, even more so than a skeptic like me assumed: https://www.nytimes.com/interactive/2024/09/03/technology/zo... (sadly the interactive is necessary here and archives don't work; this is a gift link)

I still think fully autonomous vehicles on standard roads are 50+ years out. The argument was always that ~95% of driving is addressable by deep learning, but the remaining ~5% involves difficult problem-solving that cannot be solved by data because the data does not exist. It will require human oversight or an AI architecture capable of deterministic reasoning (not transformers), say at least at the level of a lizard. Since we have no clue how to make an AI as smart as a lizard, that 5% problem remains utterly intractable.

[1] I have complained for years that Waymo's statisticians are comparing their cars to all human drivers when they should be comparing it to lawful human drivers whose vehicles are well-maintained. Tesla FSD proves that self-driving companies will respond to consumer demand for vehicles that speed and run red lights.

by ben_w 7 hours ago

While I broadly agree that AI metrics have a "lies, damn lies, and statistics" problem that makes it hard to even agree how competent they are, if someone says "robotaxi" then I have no reason to expect they mean anything more nor less than "a taxi with no human driver".

I would be shocked if we're really 50 years away from that level of AI. 50 years is a long time in computing — late 70s computers were still using punched tape:

https://commons.m.wikimedia.org/wiki/File:NSA_Punch_Verifica...

by aithrowawaycomm 5 hours ago

> if someone says "robotaxi" then I have no reason to expect they mean anything more nor less than "a taxi with no human driver".

You have three reasons:

1) reading the comment in good faith

2) understanding 'robotaxi' is not a precise technical term

3) safely assuming that most commenters here know about Waymo

There is no reason to choose the most pedantic and smarmily bad-faith reading of the comment.

As for "50 years" - I don't care about electrical engineering, I am talking about intelligence. In the 1970s we had neural networks as smart as nematodes. Today they are as smart as spiders. Maybe in 50 years they will be as smart as bees. I doubt any of our children will live to see a computer as smart as a rat.

by ben_w 5 hours ago

> You have three reasons:

> 1) reading the comment in good faith

> 2) understanding 'robotaxi' is not a precise technical term

> 3) safely assuming that most commenters here know about Waymo

#1 is the main reason why I wouldn't read "robotaxi" as anything other than "taxi robot", closely followed by #2.

> As for "50 years" - I don't care about electrical engineering, I am talking about intelligence.

Neither was I, and you should take #1 as advice for yourself.

> In the 1970s we had neural networks as smart as nematodes. Today they are as smart as spiders. Maybe in 50 years they will be as smart as bees. I doubt any of our children will live to see a computer as smart as a rat.

You're either overestimating the ones in the 70s or underestimating the ones today. By parameter count, GPT-3 is already about as complex as a medium-sized rodent's brain. If today's models aren't that smart (definitions of "intelligence" are surprisingly fluid from one person to the next), then you can't reasonably call the ones in the 70s as smart as a nematode either.
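A rough, order-of-magnitude sketch of that parameter-count comparison. The neuron and synapse figures are commonly cited approximations, and treating one parameter as loosely comparable to one synapse is itself a big assumption:

    # Order-of-magnitude comparison, loosely equating parameters with synapses
    gpt3_params = 175e9                  # GPT-3 parameter count
    rat_neurons = 2e8                    # ~200 million neurons in a rat brain (approximate)
    synapses_per_neuron = 1e3            # low-end estimate of synapses per neuron
    rat_synapses = rat_neurons * synapses_per_neuron   # ~2e11
    print(gpt3_params / rat_synapses)    # ~0.9, i.e. the same order of magnitude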

by FireBeyond 5 hours ago

> I have complained for years that Waymo's statisticians are comparing their cars to all human drivers when they should be comparing it to lawful human drivers whose vehicles are well-maintained. Tesla FSD proves that self-driving companies will respond to consumer demand for vehicles that speed and run red lights.

Really? Waymo's statisticians are the ones you are complaining about?

Tesla's statisticians have been lying for years, as has Musk, citing "number of miles driven by FSD in the very small subset of conditions where it is available, and not turned off or unavailable because of where you are, the weather, or any other variable" versus "all drivers, all conditions, all locations, all times" to try to claim FSD is safer.

by dagw 8 hours ago

> barber demands paper cash only... I have to stand in line at USPS and DMV

Surely this is just a case of the future not being evenly distributed. All of these 'problems' are already solved and the solution is implemented somewhere, just not where you happen to be.

by twelve40 7 hours ago

You're probably right. I'll wait until it gets better distributed. It's just that personally, it's been hard to reconcile the grandiose talk with what's actually around me, that's all.

by ben_w 6 hours ago

The weird distribution was something I experienced when I last visited California in 2018 and couldn't use my card in most places, despite having been to Kenya a few years before that and seeing M-Pesa all over the place.

by iamsrp 7 hours ago

> robotaxis are still almost here, just around the corner, as always.

You can walk to where they're waiting for you.

by m3kw9 7 hours ago

The Atlantic is really going out of their way to hate on Altman. That publication has always been a bit of a whack job of an outfit.

by throwgfgfd25 7 hours ago

> That publication has always been a bit of a whack job of an outfit

This is a bizarre take about a 167-year-old, continuously published magazine.

by FireBeyond 5 hours ago

There's quite a number of posts here that seem determined to attack the integrity of The Atlantic... hm. Everything from "their writers are scared that they'll lose their jobs" to complaints about twenty-year-old articles not panning out correctly.

by throwintothesea 7 hours ago

The techbro whining about as staid a publication as the Atlantic is truly hilarious. They published some of the first pieces in support of abolition, and now you're upset that they're swinging at the richest grifter in Silicon Valley. Get a grip.

by throwgfgfd25 2 hours ago

It also famously published As We May Think.

by m3kw9 2 hours ago

He brought us AI that is about to change almost everything, be grateful for the magic intelligence in the sky

by whoiscroberts 7 hours ago

Any person that thinks “automating human thought” is good for humanity is evil.

by mrangle 7 hours ago

I can't imagine taking The Atlantic seriously on anything. My word. You aren't actually supposed to read the endless ragebait.

Contrary to The Atlantic's almost always intentionally misleading framing, the "dot com boom" did in fact go on to print trillions later, and it is still printing them, after what was ultimately a marginal, if account-clearing, dip for many.

I say that as someone who many would deem an AI pessimist.

But it's wildly early to declare anything to be "what it is" and only that, in terms of ultimate benefit. Just like it was, and is, wild to declare the dot com boom to be over.

by architango 6 hours ago

> I can't imagine taking The Atlantic seriously on anything.

Agreed - I stopped taking The Atlantic seriously after their 2009 cover story, "Did Christianity Cause the Crash?"[1] To ignore CDOs, the Glass-Steagall repeal, the co-option of the ratings agencies, and the dissolution of lending standards, and instead blame the Great Recession on a few obnoxious megapastors, is to completely discard the magazine's credibility.

[1] https://www.theatlantic.com/magazine/archive/2009/12/did-chr...

by FireBeyond 5 hours ago

Really, you can't see a connection between people promising supernatural rewards for being money- and prosperity-driven, stoking that fire right alongside rampant capitalism and reckless decisions about how to protect our financial and economic wellbeing? Especially when even some very, very bright people still believe in imaginary creatures in the sky and use that to shape and guide their decision process?

Maybe not "cause", but "contribute notably to".

by architango 5 hours ago

You might argue that faith, in its various manifestations, is the root of all economic distortions and malaises. That being said, no, I don’t think that a subset of evangelical Christianity caused or meaningfully contributed to the 2009 crash, since there are abundant facts and analyses to the contrary. To believe otherwise would be, ironically, an act of faith.
