How Big Is VMS? (vmssoftware.com)

68 points by rbanffy a day ago

66 comments:

by jamesy0ung a day ago

Is there any reason to use VMS today other than for existing applications that cannot be migrated? I've heard its reliability is legendary, but I've never tried it myself. The 1 year licensed VM seems excessively annoying. Is it just old and esoteric, or does it still have practical use? At least with Linux, multiple vendors release and support distros and it is mainstream, whereas with VMS, you'd be stuck with VSI.

by zie 16 hours ago

In modern times, we have taken the "everything breaks all the time, so make redundancy and failover cheap/free" approach.

VMS (and the hardware it runs on) takes the opposite approach: keep everything alive forever, even through hardware failures.

So the VMS machines of the day had dual redundant everything, including interconnected memory across machines and SCSI interconnects; everything you could think of was redundant.

VMS clusters could be configured in a hot/hot standby situation, where 2 identical cabinets full of redundant hardware could fail over mid-instruction and keep going. You can't do that with the modern approach. The documentation nearly filled an entire wall of office bookcases. There was a lot of documentation.

These days, usually nothing inside the box is redundant; instead we duplicate the boxes and make them cheap sheep, a dime a dozen.

Which approach is better? That's a great question. I'm not aware of any academic exercises on the topic.

All that said, most people don't need decade-long uptimes. Even the big clouds don't bother trying for decade-long uptimes; they regularly have outages.

by malux85 10 hours ago

One of the things that blew my mind in my early career was seeing my mentor open the side of a VMS machine (I can’t remember the hardware model sorry) and slide out a giant board of RAM, and then slide in another board the same physical size but it had a CPU on it, and then enable the CPU

The daughter board in that machine could have RAM or CPUs in the same slot and it was changeable without reboots!

by zie 5 hours ago

Exactly! One would never, ever do that with x86.

by davesmylie a day ago

I was actually surprised to see that there's been a release in the last 12 months - I had thought it was dead.

I used it extensively in the late 90's early 00's and really liked it. As a newb sysadmin at the time, the built-in versioning on the fs saved me from more than one self-inflicted fsck up.
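For anyone who never used it: the Files-11 filesystem keeps every save as a numbered version (LOGIN.COM;1, LOGIN.COM;2, ...), and PURGE trims the old ones. A rough Python sketch of those semantics (purely illustrative, not VMS code):

```python
import os

def versioned_write(path, data):
    """Write data to path;N, where N is one higher than any existing
    version -- mimicking Files-11, where a write creates a new version
    instead of clobbering the old one."""
    d = os.path.dirname(path) or "."
    base = os.path.basename(path)
    existing = []
    for name in os.listdir(d):
        if name.startswith(base + ";"):
            try:
                existing.append(int(name.rsplit(";", 1)[1]))
            except ValueError:
                pass
    version = max(existing, default=0) + 1
    with open(f"{path};{version}", "w") as f:
        f.write(data)
    return version

def purge(path, keep=1):
    """Like the DCL PURGE command: delete all but the newest `keep` versions."""
    d = os.path.dirname(path) or "."
    base = os.path.basename(path)
    versions = sorted(
        int(n.rsplit(";", 1)[1]) for n in os.listdir(d)
        if n.startswith(base + ";")
    )
    for v in versions[:-keep]:
        os.remove(f"{path};{v}")
```

Overwrite a file by accident and the previous version is still sitting there, which is exactly the "saved me from myself" behaviour described above.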

I can't imagine there would be any green-field deployments in the last 10 years or so - I'm guessing it's just supporting legacy environments.

by lproven 8 hours ago

> I can't imagine there would be any green-field deployments in the last 10 years or so - I'm guessing it's just supporting legacy environments.

This is not entirely the case.

I have been writing about VMS for years. The first x86-64 edition, version 9, was released in 2020:

https://www.theregister.com/2022/05/10/openvms_92/

Version 9.0 was essentially a test. 9.1 in 2021 was another test and v9.2 in 2022 was production-ready.

There's no new Itanium or Alpha hardware, and version 8.x runs on nothing else. Presumably v9.x is selling well enough to keep the company alive because it's been shipping new versions for a while now.

Totally new greenfield deployments? Probably few. But new installs of the new version, surely, yes, because VMS 9 doesn't run on any legacy kit, so these must be new deployments.

It's been growing for a few years. Maybe not by much, but a major new version and multiple point releases mean somebody is buying it and deploying it. Never mind "no new deployments in a decade": there have been more new deployments in the last few years than in the previous decade.

by Kon-Peki 18 hours ago

> I had thought it was dead.

HP tried to kill it. Somewhere in the neighborhood of 10 years ago they announced the EOL. This company - VMS Software Inc (VSI) was formed specifically to buy the rights and maintain/port it. So you have an interesting situation.

Old VAX and Alpha systems are supported, supposedly indefinitely, but if you have an Itanium system it has to be newer than a certain age. HP didn’t sell the rights to support the older Itaniums, and no longer issues licenses for them. So there is a VMS hardware age gap. Really old is ok. Really new is ok.

by rbanffy 9 hours ago

It's now ported to x86 as well, so you can probably just order a Dell box and install OpenVMS on it.

by lproven 8 hours ago

HP box. It is a former HP product.

Version 9.x has been out for 5 years, stable for 3, and primarily targets and supports hypervisors. It knows about and directly supports VMware, Hyper-V and KVM.

So, yes, get a generic x86-64 box, bung one of the big 3 hypervisors on it, and bang, you are ready to run VMS 9.

by rbanffy 2 hours ago

I’m bringing up my own on a Lenovo with KVM.

by rbanffy 21 hours ago

MCP and MVS (now called z/OS) are both still supported. Not sure whether MCP still receives updates though.

by skissane 17 hours ago

> Not sure whether MCP still receives updates though.

MCP Release 21 came out in mid-2023, and release 22 is supposed to be out middle of this year, with further releases planned: https://www.unisys.com/siteassets/microsites/clearpath-futur...

Looking at new features, they seem to be mainly around security (code signing, post quantum crypto) and improved support for running in cloud environments (with the physical mainframe CPU replaced by a software emulator)

Unisys’ other mainframe platform, OS 2200 is still around too, and seems to follow a similar release schedule - https://www.unisys.com/siteassets/microsites/clearpath-futur... - although I get the impression there are more MCP sites remaining than OS 2200 sites?

by sillywalk 13 hours ago

Does MCP or OS 2200 have any well known users, or was there a niche that they fill(ed)?

Also, I noted that those two roadmaps offered continuity - ClearPath Forward -> "Don't worry about migrating or refactoring your apps" - but also stated that "none of these new features are guaranteed to show up, and if that damages your company financially, it's not our fault".

I don't know if this is just a standard legal cop-out.

by skissane 13 hours ago

> Does MCP or OS 2200 have any well known users,

I know the Michigan state government uses Unisys MCP (I don’t know for what): https://www.michigan.gov/-/media/Project/Websites/dtmb/Procu...

In 2023, NY State Education Department had an RFP to build a replacement for their Unisys MCP-based grants admin system with a modern non-mainframe solution (don’t know current status of that project): https://www.nysed.gov/sites/default/files/programs/funding-o...

It is generally easier to find out who government users are because they are often required to publish contracts with the mainframe vendor, RFPs for replacement systems or services, etc. (Exception is some national security users where the existence of the system and/or the tech stack it runs on may be classified.) By contrast, private companies, that kind of info is usually only available under NDA - obscure legacy systems is the kind of “dirty laundry” a lot of business don’t want publicly aired

In 2013, it was reported in the media that the Australian retailer Coogans was one of the last (maybe the last?) Unisys mainframe sites in Australia - https://www.smh.com.au/technology/tassie-retailer-rejects-cl... - I don’t know if they kept their mainframe after that or got rid of it, but in 2019 they went out of business - https://www.abc.net.au/news/2019-03-12/hobart-retailer-cooga...

> but also stated that "none of these new features are guaranteed to show up, and if that damages your company financially, it's not our fault".

> I don't know if this is just a standard legal cop-out

I’m pretty sure that’s just the “standard legal cop-out” - lots of vendors put language like that in their roadmaps, to make it harder for customers to sue them if delivery is delayed or if the planned next version ends up being cancelled

by quesomaster9000 18 hours ago

Right, but z/OS is part of a larger longer-running hardware strategy that, with virtualization, serves the needs of mixed-OS workloads and multi-decade tenures overseeing 24/7 systems.

The corpse of OpenVMS on the other hand is being reanimated and tinkered with, presumably paid for by whatever remaining support contracts exist, and also presumably to keep the core engineers occupied with inevitably fruitless busywork while occasionally performing the contractually required on-call technomancy on the few remaining Alpha systems.

VMS is dead... and buried, deep.

It's a shame it can't be open-sourced, just like Netware won't be open-sourced, and probably has less chance of being used for new projects than RiscOS or AmigaOS.

by icedchai 2 hours ago

I also disagree. Porting VMS to x86-64 was a huge endeavor. They wouldn't have bothered unless there were at least a few big customers to make it worth it. Otherwise, why not go with emulation? There are commercially supported Alpha and VAX emulators for x86.

by lproven 8 hours ago

I disagree.

It's in active development. They're putting out new versions and selling licenses.

There are much deader OSes out there than VMS, such as Netware.

I suspect that there are more fresh deployments than there are of Xinuos's catalogue: OpenServer 5, 6, and UnixWare 7.

https://www.xinuos.com/products/

Last updated 2018...

by icedchai 21 hours ago

It's fun for hobbyists! The first multi-user system I used happened to be a VAX/VMS system, so it brings me back to my youth. I have a VAX running in simh, complete with compilers. The release, VMS 7.3, is almost 25 years old.

by kristjank 19 hours ago

There is a pretty good amount of operation-critical stuff like power plants (especially nuclear), radars, traffic management and various finance/insurance/airline services that operate on VMS afaik. Something about very reliable cluster-native operations, or so it would seem. 90's cloud-native.

by vaxman 39 minutes ago

That is the kind of comment that a well run bulletin board would moderate. Then again, there are probably not enough VMS systems people left to really have an r-war (short of architecture war).

VMS' key feature over Unix is consistency; beyond that, it is interrupt-driven at all levels (no wasted cycles polling, except for code ported over using POSIX interfaces). VMS was killed by a confluence of business issues, not because it was obsolete or inefficient.

by smackeyacky 21 hours ago

I used it a bit at University - most notably it had an Occam system on it that wasn't available on the Sun workstations.

I'm curious about running a VMS system although the admin side looks a bit daunting. The thing I'd really like to do is run X-Windows on an emulator on my home lab, just to see it run.

by rjsw 20 hours ago

I had an Alpha AXP 150 workstation on my desk for a while, it ran X11 applications fine with no source changes required.

by snovymgodym 19 hours ago

> Is there any reason to use VMS today other than for existing applications that cannot be migrated?

No, there is no reason to do a greenfield VMS deployment and hasn't been for a long time.

> I've heard its reliability is legendary, but I've never tried it myself.

I've heard the same things but I am doubtful as to their veracity in a modern context. Those claims sound like they come from an era where VMS was still a cutting-edge and competitive product. I'm sure VMS on vaxclusters had impressive reliability in the 1980s, but I doubt it's anything special today. If you look at the companies and institutions that need performance and high reliability today (e.g. Hyperscaler companies or the TOP500) they are all using the same thing: Linux on clusters of x86-64 machines.

by przemub an hour ago

Hey, something like twenty are not x86-64 based :) With ARM Fugaku at the top a couple years ago.

by 4ad 9 hours ago

Hyperscaler companies or the TOP500 don't need high reliability, especially the latter.

With cloud computing reliability is achieved through software, distributed software which needs to be horizontal.

On a mainframe reliability is achieved through hardware (at least as far as user software is concerned), and the software is vertical.

If you need to run vertical, single-system image software, the cloud is useless for making it reliable.

Systems built on the cloud are reliable only insofar as people can write reliable distributed systems which assume components will fail. It is not reliable if you can't, or don't want to write software like that (which carries a significant engineering cost).
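That "reliability through software" burden can be sketched in a few lines. This is a generic illustration of the pattern, not any particular cloud API; the replica list and call shape are hypothetical:

```python
import random

class ReplicaUnavailable(Exception):
    """Raised by a replica that is down -- the failure the caller must expect."""
    pass

def call_with_failover(replicas, request, attempts_per_replica=2):
    """Reliability through software: assume any single replica can fail,
    and make the caller responsible for retrying elsewhere. This is the
    engineering cost the comment above describes -- the opposite of
    hardware that promises never to fail."""
    last_error = None
    # Randomize order so load and failures spread across replicas.
    for replica in random.sample(replicas, k=len(replicas)):
        for _ in range(attempts_per_replica):
            try:
                return replica(request)
            except ReplicaUnavailable as e:
                last_error = e  # retry here, then move to the next replica
    raise RuntimeError("all replicas failed") from last_error
```

Every caller in the system has to be written this way (plus timeouts, idempotency, and consistency handling, which this sketch omits), which is exactly why it "carries a significant engineering cost" compared to a single vertical system that simply stays up.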

The real reason to avoid mainframes (and VMS) is vendor lock-in, not technological.

by nickpsecurity 16 hours ago

I think you're half right.

On one hand, I don't see many of the modern services having years to decades of uptime. Clustering is also bolted onto many products while not available for most products. These were normal for OpenVMS deployments. Seems like a safer bet in that regard.

If people have the money, which VMS requires for such goals, they can hire the kind of sysadmins and programmers who can do the same on *nix systems. The number of components matching VMS's prior advantages increases annually. Also, these are often open source, with the corresponding advantages for maintenance and extension.

The other thing I notice is VMS systems appear to be used in constrained ways compared to how cloud companies use Linux. It might be more reliable because users stay on the happy path. Linux apps keep taking risks to innovate. FreeBSD is a nice compromise for people wanting more stability or reliability with commodity hardware.

Then, you have operating systems whose designs far exceed VMS in architectural reliability. INTEGRITY RTOS, QNX, and LynxOS-178B come to mind. People willing to do custom, proprietary systems are safer building on those.

by mixmastamyk 18 hours ago

No. Most of the good stuff was lifted into Windows NT decades ago. The rest has been far surpassed over the same time period by Linux and others. A few cool things probably fell into the cracks, but that's common in the industry.

It's interesting in a "what if/parallel universe" kind of way, but I certainly wouldn't touch it for anything new with that licensing.

by jonstewart 21 hours ago

I had a job at a place in college, back in 1997–2000, that was run by a big DEC Alpha server running VMS. VMS was dying then.

I was just a lowly kid programmer working on a side project, so I can't tell you whether it's still uniquely good at something to justify its usage today. It worked. But it was weird and arcane (not that Unix isn't, but Unix won) and using it today for a new project would come with a lot of friction.

by cbm-vic-20 21 hours ago

VAX/VMSCluster was like the Kubernetes of the 1980s. Lots of features that appeared in k8s decades later were baked into VMS.

https://en.wikipedia.org/wiki/VMScluster

by markus_zhang 19 hours ago

David Cutler is one of my heroes. I think he only worked on the first version of VMS, but his expertise in OS design and implementation is second to few.

Does anyone know whether he is still working at Microsoft? What is it like to work with him?

by vaxman 33 minutes ago

He brought down the empire by committing the ultimate offense: walking across the street and taking the additional money. Hundreds of thousands of people lost their careers, millions more disrupted because of his selfish act. In the 9th Circle will be at least your hero, RMS of GNU and Vint of PPP and Carly of "That Face."

by pjmlp 9 hours ago

At the end of the interview some folks linked to, he mentions that he is still at Microsoft, working on an AI-related project looking into using Linux on Xbox racks.

by progmetaldev 18 hours ago

His series on YouTube, Dave's Garage, has really helped me to conceptualize Operating Systems in the past that I used, but didn't really fully understand. I started with Atari DOS, but moved to the 286 roughly around the time it started to become popular, along with MS-DOS. I'm 45 now, so didn't think about things as deeply back then, as far as constraints and why things may have been built the way they were. I think his series really does a great job at looking into both technology and company politics (especially his time at Microsoft).

https://youtu.be/xi1Lq79mLeE?feature=shared

by lmz 16 hours ago

That's Dave PLUMMER's channel. Dave CUTLER is the guest in that video.

by progmetaldev 15 hours ago

You are correct, thank you. Dave Cutler has apparently done many guest spots on Dave's Garage, so many that I confused their names! Appreciate the correction, so I can perform some better searches of Dave Cutler, having only looked into the videos on YouTube.

by markus_zhang 16 hours ago

Thanks, I think he did an interview of David Cutler which is pretty interesting.

by progmetaldev 15 hours ago

Yes, lmz <https://news.ycombinator.com/user?id=lmz> pointed out that I had mixed up Dave Plummer and Dave Cutler. There are actually multiple videos that Dave Plummer has done with Dave Cutler, which I found very interesting.

Reminds me of Coders At Work, by Peter Seibel, which I read right around the time that I decided to get deeper into software. Being able to read or hear about the process that went on in someone's head while developing something so major was and is still impressive, and motivating.

by voidfunc 18 hours ago

He's a high level technical fellow as far as I know but he's also 83. I suspect his involvement these days, if any, is purely advisory to a limited group of core NT kernel and RDOS engineers.

by markus_zhang 16 hours ago

Yeah, I agree, it's probably high level or retirement.

by jonstewart 14 hours ago

The last article I read about him implied that he’s intent on working and that working with him was… intense. But that was a few years ago.

I like to imagine there’s an inner sanctum in a secure sub-basement of Microsoft where a couple dozen cracked kernel developers work quietly… except when Dave Cutler asks them to come into his personal lab through the three foot thick blast doors and man-trap so he can yell at them about a bug he found.

by snovymgodym 19 hours ago

I have a morbid curiosity about this system, but I don't really have a charitable view of it.

The story as far as I know, goes like this

Back in the late 1970s Dave Cutler and his team create VMS at DEC as the next generation operating system for DEC's new flagship product, the VAX minicomputer.

VMS is good by all accounts and was a successful product, but Unix goes on to dominate the minicomputer and emerging server market for the next decade.

Then in the 1990s DEC goes out of business and sells VMS to Compaq, but not before porting it to their doomed Alpha CPU architecture.

Then in 2000s Compaq goes out of business and gets acquired by HP, and together they port VMS to the doomed Itanium CPU architecture.

In 2014, a shop called VMS Software Inc (VSI) strikes some kind of deal with HP where they get to develop and support new versions of VMS while older versions continue to belong to HP. As part of this, they finally announce an x86-64 port. This port first sees the light of day in 2020.

----

The story is essentially bad bet after bad bet, missing the boat and fighting the last war over and over again. And today, it's just a piece of legacy software being used to extract the last bits of value from the organizations that are stuck with it.

Even so, I hope for a true open source release of it one day.

by mepian 18 hours ago

How was the Alpha a bad bet? x86-64 didn't exist yet, and the architecture was pretty solid technologically. It died because DEC died, not the other way around.

by icedchai 15 hours ago

Alpha was so far ahead, compared to the other mid 90’s “workstation” vendors. I went to a university with tons of DEC hardware, then worked at a mostly DEC shop for a bit. It’s a shame DEC died.

by LeFantome 15 hours ago

I really loved the Alpha platform. It was not as fast as it felt like it should have been given the clock speed. It also seemed like a real memory pig compared to x86 at the time. That was probably just because it was 64 bit. Everything is a memory pig now I guess. :)

Alpha boxes were cool. High clock speeds, massive amounts of RAM for the time, and huge storage. When they were the only 64 bit systems, they were the only game in town for some workloads.

by sillywalk 2 hours ago

> When they were the only 64 bit systems

They were never the only 64 bit systems. MIPS introduced their 64 bit R4000 in 1991, a year before the Alpha came out. Sun released the 64 bit UltraSPARC in '95, along with IBM's 64bit PowerPC AS for their AS/400 systems. HP released the 64bit version of their PA-RISC in 1996.

by icedchai 2 hours ago

In the late 90's, another engineer came up to me and said they had an Alpha with a couple gigs of RAM. That was almost unheard of for the time! My x86 laptop then had 32 megs. It's funny looking back at that now... Everything is a memory pig.

by brazzy 12 hours ago

> It also seemed like a real memory pig compared to x86 at the time. That was probably just because it was 64 bit. Everything is a memory pig now I guess. :)

Wasn't Alpha also a fairly pure RISC architecture, with larger machine code being an inherent property of that?

by jabl 13 hours ago

> How was the Alpha a bad bet?

Not technically (the Alpha ISA had its good and bad sides, but was decent enough), but economically. DEC just didn't have the market share, and thus the economic muscle, to survive a game of ever-increasing R&D costs for each successive generation. Hence DEC ended up acquired by Compaq, which was then acquired by HP.

HP also saw the writing on the wall, and developed Itanium with Intel as a replacement for their PA-RISC, thinking that Itanium could benefit from Intel's huge economy of scale in chip manufacturing. And after acquiring Compaq (with DEC Alpha) it sunset the Alpha as well in favor of Itanium, for the same reasons. Well, we all know how the Itanium story turned out.

by nickpsecurity 16 hours ago

More details about how it went down here:

https://www.informationweek.com/it-leadership/compaq-to-aban...

ARM's and POWER's success suggests Alpha might have made it. Compaq wanted to partner on Itanium, though. Eventually, Intel got the Alpha I.P. rights which might as well have been a death sentence.

Last Alpha I saw was the SAFE architecture that added security features to a homebrew CPU that was derived from Alpha ISA. What I liked most on Alpha, though, was PALcode with its atomic execution.

by Spooky23 16 hours ago

Nah. It was a totally different world. These companies were competing on proprietary advantages at the OS or hardware level that became less and less relevant over time. HP self-immolated and you were left with IBM and Sun.

Only IBM survived, and that’s because it won key contracts in the 60s and 70s to run verticals and business systems, and essentially leveraged mainframe financing and legacy contracts to cross-sell everything. On the tech side, they parlayed that into a sustainable business by virtualizing everything and sharing the Power platform. They get some new business for AIX, but it’s mostly that legacy business.

A good chunk of DEC’s and Compaq’s business was running terminal (as in tty) operations for mainframes. That became endangered with NT 3.5 and extinct with NT 4. As Linux improved, Intel was good enough. ARM is doing to Intel what Intel did to everyone.

by lproven 8 hours ago

> The story as far as I know, goes like this

That is not really accurate or representative at all, no.

by jabl 13 hours ago

> In 2014, a shop called VMS Software Inc (VSI) strikes some kind of deal with HP where they get to develop and support new versions of VMS while older versions continue to belong to HP. As part of this, they finally announce an x86-64 port. This port first sees the light of day in 2020.

An interesting factoid about the x86-64 port is that they switched to LLVM-based compilers rather than writing x86-64 backends for their legacy compilers.

by lproven 3 hours ago

Let me try to recap a fairer version...

> Back in the late 1970s Dave Cutler and his team create VMS at DEC as the next generation operating system for DEC's new flagship product, the VAX minicomputer.

Not exactly.

In the '60s DEC made several of the leading minicomputers. One was a 16-bit box, the PDP-11, a critical machine in the history of Unix as the first new platform it was ported to.

(It was written on an 18-bit DEC mini, the PDP-7. Part of the reason that the PDP-11 got big was that the industry was moving to 8-bit bytes and 16-bit words.)

The VAX was the 32-bit extended version of the PDP-11. It added virtual memory: VAX stands for Virtual Address Extension.

Cutler wrote one of the most successful PDP-11 OSes, RSX-11. He was famously much faster than rival teams, so got the job of writing a 32-bit OS for the new 32-bit machine.

> VMS is good by all accounts and was a successful product, but Unix goes on to dominate the minicomputer and emerging server market for the next decade.

Not really. VMS 1.0 was 1977. Its clustering is still best-of-breed today, able to present multiple heterogeneous machines (VAX, Alpha, Itanium and x86-64) as a single virtual host on the network, with multiple nodes sharing drives, and with nodes able to join and leave while the cluster stays up. Uptimes in decades are normal, with OS upgrades happening in that time.

DEC enjoyed 10-15yr of dominance in its sector before Unix started to become much of a threat. The first SUN workstation wasn't for 5 years yet. The first SPARC not for another 12yr.

> Then in the 1990s DEC goes out of business

Nope nope nope.

Cutler proposes a plan: a successor to the VAX, a 32-bit RISC chip (PRISM), and a successor to VMS, a multi-personality OS (MICA) to run on it.

DEC says no. It does not believe that microcomputers and Unix are threats, and it spends billions of dollars on the VAX 9000, a series of mainframe-class VAX machines. By the time they eventually ship, the performance is uncompetitive.

Mind you while it's doing this it's selling tons of VAXes including high-end workstations; I bought several and ran clusters of them.

Microsoft headhunts Cutler and his core team with him. It gives him OS/2 3.0 to finish, the portable (CPU-independent) version. They built it on Intel's next-gen RISC chip, i860: the x86 times ten. Codename: N-10. The OS is renamed OS/2 NT for N-Ten.

Note: officially denied now, yes, I am fully aware. Don't believe everything you hear.

Cutler implements his planned MICA multi-personality OS, able to emulate other OSes at kernel level, as NT. Most OS/2 stuff is junked but at launch it can format and use OS/2 HPFS disks and run OS/2 text-mode binaries, and an optional add-on to run Presentation Manager GUI apps is available.

DEC rescues PRISM, upgrades it to 64-bit, and calls it Alpha. Fastest CPU in the industry and the first 64-bit single-chip processor. Runs Unix, VMS, and Windows NT. First non-x86 chip to get Linux ported to it.

DEC also uses this experience to design the first superscalar ARM, called StrongARM.

DEC also gets a very sweet deal on NT for Alpha; the rumour is that DEC has proof that Cutler used MICA code in NT and has MS over a barrel.

DEC remains a major industry force. It also makes networking kit, printers, HDDs and tape drives, Ethernet chipsets, PCs -- it's almost a one-stop shop. You can build an entire enterprise network entirely from DEC kit, from the screens to the keyboards to the Ethernet switches. I did.

> sells VMS to Compaq, but not before porting it to their doomed Alpha CPU architecture.

Way off. Not even close.

DEC's lost MICA project, now called Windows NT, eats into its revenues. It loses market share to cheap x86 PCs and an OS based on a DEC design.

Compaq buys DEC. It's too big to digest and Compaq gets in trouble.

> Then in 2000s Compaq goes out of business and gets acquired by HP, and together they port VMS to the doomed Itanium CPU architecture.

Not really, no.

Cash-rich HP, which has lots of experience with managing non-x86 lines, acquires one of its biggest competitors in the x86 space, which has zero such experience.

HP buys Compaq. HP has its own RISC chip, its own Unix, its own fault-tolerant systems, all kinds of legacy stuff. It is quite used to killing old product lines. It also has a high-end enterprise email server that is compatible with MS Exchange.

HP makes good money from its partnership with MS, though.

So, HP kills HP OpenMail and sells the IP to Samsung, trades Alpha to Intel in return for killing its RISC chip... it goes all-in on MS and being the premium enterprise MS partner. Anything that rivals anything from Intel or Microsoft, HP kills.

HP works with Intel to make an EPIC chip that it tells customers will replace its PA-RISC.

HP merges the surviving DEC enterprise (non-x86) kit into its enterprise lines.

It announces it's killing VMS.

I wrote this: https://www.theregister.com/2013/06/10/openvms_death_notice/

There is a big customer outcry.

> In 2014, a shop called VMS Software Inc (VSI) strikes some kind of deal with HP where they get to develop and support new versions of VMS while older versions continue to belong to HP. As part of this, they finally announce an x86-64 port. This port first sees the light of day in 2020.

HP spins off VMS to a new company.

https://www.theregister.com/2014/07/31/openvms_spared/

As there is no new Alpha or Itanium kit, the new company's proposition is to help customers nurse VMS on Alpha or Itanium until it has an x86-64 VMS.

It delivers this by 2020.

by sillywalk 2 hours ago

> [HP's] own fault-tolerant systems

Which were these? I didn't know HP had a fault-tolerant line.

I know Compaq purchased Tandem Computers with their fault-tolerant NonStop systems, and they intended to port it from MIPS to Alpha.

by uptownfunk 15 hours ago

Wow, this is what put food on the table for us when I was a kid. Respect to VAX/VMS. I wonder if anyone remembers Sperry, Univac, Digital, Burroughs, and the other tech companies of yore. It gave me a childhood in America, so however old or obsolete it may be, I have a fond place for it.

by whartung 16 hours ago

Does it seem right that ACPI alone is almost 9% of the LOC in this code base? At over 165K lines?

Mind, I honestly don't know anything about the details of ACPI.

But, seems like a lot to me.

by p_l 8 hours ago

Remember that there aren't as many drivers in the VMS core as in Linux (where just the AMD DRM driver dwarfs some entire older kernel versions), and ACPI, for all its complexity, also acts as a portability layer over hardware differences that in VMS's past meant releasing an entire special build of the system just to get it to boot; that is now covered by ACPI support.

Also, we don't know how much of that is test code or sample code.

by mrweasel 8 hours ago

I seem to recall reading somewhere that there are really only three ACPI implementations: Microsoft's, Intel's (used in Linux and FreeBSD, among others), and OpenBSD's. So I wonder if VMS might be a fourth, or if they are also using the Intel implementation.

by rayiner 16 hours ago

ACPI is a monster. 1,100 page spec: https://uefi.org/sites/default/files/resources/ACPI_Spec_6_5...

There’s a ton of different tables you have to parse and as I recall there’s a whole bytecode you need to be able to execute.
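To give a concrete sense of the "parsing tables" part: every ACPI System Description Table starts with a common 36-byte header (4-byte signature, 4-byte length, a checksum byte, OEM fields, etc.), and a table is valid only if all of its bytes sum to zero mod 256. A minimal Python sketch of that header check follows (the AML bytecode interpreter mentioned above is the genuinely hard part and is not shown):

```python
import struct

def parse_acpi_header(table: bytes):
    """Parse the common ACPI table header and verify the table checksum
    (the sum of every byte in the table must be 0 mod 256)."""
    if len(table) < 36:
        raise ValueError("too short for an ACPI table header")
    # First 24 bytes of the header: signature, length, revision,
    # checksum, OEM ID, OEM table ID (OEM/creator revisions follow).
    sig, length, revision, checksum, oem_id, oem_table_id = \
        struct.unpack_from("<4sIBB6s8s", table, 0)
    if length > len(table):
        raise ValueError("declared length exceeds available bytes")
    if sum(table[:length]) % 256 != 0:
        raise ValueError("bad checksum")
    return sig.decode("ascii"), length, revision, oem_id.decode("ascii")
```

Multiply that by dozens of table types (MADT, FADT, DSDT, ...), each with its own body layout, plus the AML interpreter, and the line count starts to look less surprising.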

by zabzonk 19 hours ago

Can we be clear? This is VAX VMS?

by icedchai 19 hours ago

Yes! VAX/VMS became OpenVMS back in the 90's. It runs on VAX, Alpha, Itanium, and x86-64.

by ConanRus 20 hours ago

They sit on that legacy IP as if anybody cared. I was in the hobbyist program and they screwed it up. I used to be able to download and set up all their products; now I can't even do that. The only option is to re-download and re-set-up a VM image every 6 months or so.

Really "user friendly". And then they whine that nobody contributes to open source for VMS.

by BSDobelix 10 hours ago

>Really "user friendly". And then they whine that nobody contributes to open source for VMS.

Yep, with one of the first x86 versions you could download everything, and the plan was to renew the license once a year. Then they canceled that license after only one year in favor of providing a new image every year (as if I want to reconfigure my system every year).

That's not how you attract devs or users.

by progmetaldev 18 hours ago

Sounds like a company that has enough high paying legacy clients that they still offer a bit of support, but want that free open source work without improving it to make that work easier to complete.

by BSDobelix 10 hours ago

https://vmssoftware.com/about/news/2024-03-25-community-lice...

>Despite our initial aspirations for robust community engagement, the reality has fallen short of our expectations. The level of participation in activities such as contributing open-source software, creating wiki articles, and providing assistance on forums has not matched the scale of the program.

The "aspiration" lasted for a whole year ;)

by 4ad 9 hours ago

At some point I was contacted to do a VMS port of Go, for the new x86 port (I have written the Go ports for arm64, sparc64, and for Solaris). In the end it all fizzled out because they couldn't arrange for some sort of license for me such that I would actually be able to run VMS.

Data from: Hacker News, provided by Hacker News (unofficial) API