I built an NFSv3-over-OpenVPN network for a startup about a decade ago; it worked “okay” for transiting an untrusted internal cloud-provider network, and even over the internet to other datacenters, but ran into mount issues when the outer tunnels dropped a connection during a write. They ran out of money before it had to scale past a few dozen nodes.
Nowadays I would recommend NFSv4+TLS or Gluster+TLS if you need filesystem semantics. Better still would be a proper S3-style or custom REST API that can handle the particulars of whatever strange problem led to this architecture.
Publicly available data[1] on the pilot project in Nevada suggests a total of “50MW” of generation capacity is planned across 10 rail lines, but the photos on the website seem to show only 1 set being built so far - with a claimed output of 5MW. The per-car mass of 720,000 lb (~327 tonnes) being lowered 229 ft ≈ 70 m (510 ft of track × sin(26.8°)) in Earth’s 9.81 m/s^2 gravity field represents a maximum potential energy of only about 220MJ, or roughly 61 kWh per car. Reaching the 5MW peak requires a car to be dispatched every ~44 seconds. 10 cars would provide about 7.5 minutes of runtime - which matches the advertised 15-minute cycle length.
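For anyone who wants to poke at the arithmetic, here is the same back-of-the-envelope calculation as a short Python snippet (the inputs are the figures quoted above; small differences from my rounded numbers are just rounding):

    # Back-of-the-envelope check of the per-car energy and dispatch figures above.
    import math

    LB_TO_KG = 0.453592
    mass_kg = 720_000 * LB_TO_KG                           # ~327 tonnes per loaded car
    drop_m = 510 * 0.3048 * math.sin(math.radians(26.8))   # ~70 m of vertical drop
    g = 9.81                                               # m/s^2

    energy_j = mass_kg * g * drop_m      # ~2.25e8 J (quoted above as ~220 MJ)
    energy_kwh = energy_j / 3.6e6        # ~62 kWh per car (~61 kWh with the rounder figures)
    peak_w = 5e6                         # claimed 5 MW peak output
    dispatch_s = energy_j / peak_w       # ~45 s between cars to sustain 5 MW
    runtime_min = 10 * dispatch_s / 60   # ~7.5 minutes for 10 cars

    print(f"{energy_kwh:.0f} kWh/car, one car every {dispatch_s:.0f} s, "
          f"{runtime_min:.1f} min of runtime for 10 cars")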
This all seems reasonable - but is a far cry from the performance of existing pumped hydro storage plants, which have routinely exceeded 1GW since the 1970s and can run for several hours per cycle. They do require lots of water and a mountain’s worth of elevation change, which limits site selection, whereas this system seems to work with any open-pit mine.
It will be interesting to see if this technology can be made competitive with existing grid-stabilization techniques, and what challenges will be encountered along the way.
Pumped hydro should be expanded as part of a national water grid to cope with droughts and floods. NSF studied it and reached positive conclusions many years ago but no one is serious about implementing it.
To be fair, dams can be immensely destructive to ecosystems, with run-off effects that harm everything around them (humans included). My ex worked for one of the NGOs that campaign for better dams instead of no dams at all.
The great thing about this gravity storage system is how easy it is to scale. You just need a hill. Sure, it's not going to deliver the power of pumped hydro, but it's easier to build and much safer to operate. And it's certainly a better design than those concrete-block tower designs you occasionally see, which are just a windy accident waiting to happen.
If you have a hill then you can just put a water tank at the top and bottom and a pipe with a pump and a generator in between. Even if your rolling mass were iron, you would only need a tank about 8x its volume (2x per linear dimension) to store the same energy over the same drop. Much easier to build and safer than a 300-ton railcar barreling down a hill. It also scales better, has lower operating cost, has lower capital cost, and loses less energy.
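Quick check on that 8x figure, assuming pure iron at roughly 7,870 kg/m^3 versus water at 1,000 kg/m^3 (the exact alloy doesn't change the conclusion much):

    # For the same drop height, stored energy is proportional to mass,
    # so the required water volume is just the density ratio times the iron volume.
    IRON_DENSITY = 7870.0    # kg/m^3, approximate for pure iron
    WATER_DENSITY = 1000.0   # kg/m^3

    volume_ratio = IRON_DENSITY / WATER_DENSITY   # ~7.9x the volume
    linear_ratio = volume_ratio ** (1 / 3)        # ~2.0x per linear dimension

    print(f"~{volume_ratio:.1f}x the volume, ~{linear_ratio:.1f}x per side")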
Yes, using Microsoft SQL Server on Linux; hosted both on-premises with VMware and in Azure Virtual Machines - later migrated to Azure SQL Managed Instances. It worked great for the business’ needs. The major architectural advantage was that each Customer had a completely isolated database, easing compliance auditing. Each DB could be exported/migrated to a different Instance or Region, and migration scripts running slow for “whale” customers had no effect upon small fish. Monitoring of the Servers and individual Instances was straightforward, albeit very verbose due to the eventual Scale.
There were a few administrative drawbacks, largely because the SQL Server Management Studio tools do not scale well to hundreds of active connections from a single workstation; we worked around that with lots of Azure Functions runs instead. Costs and instance sizing were a constant struggle, though other engines like Postgres or even SQLite would likely be more efficient.
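The fan-out itself is nothing fancy. Here is a rough sketch of the pattern in Python with pyodbc - the server name, the example migration, and driving it straight off sys.databases are all illustrative assumptions, not the actual tooling we ran in the Azure Functions:

    # Illustrative sketch of the database-per-customer fan-out pattern.
    # Assumes pyodbc and ODBC Driver 18 for SQL Server; all names are placeholders.
    import pyodbc

    SERVER = "tenant-sql.example.net"   # hypothetical instance name
    MIGRATION = "ALTER TABLE dbo.Orders ADD Notes NVARCHAR(MAX) NULL;"  # example change

    def connect(database):
        cs = ("DRIVER={ODBC Driver 18 for SQL Server};"
              f"SERVER={SERVER};DATABASE={database};"
              "Trusted_Connection=yes;Encrypt=yes")
        return pyodbc.connect(cs)

    def tenant_databases():
        """List user databases, skipping the system ones."""
        conn = connect("master")
        try:
            rows = conn.execute(
                "SELECT name FROM sys.databases "
                "WHERE name NOT IN ('master', 'model', 'msdb', 'tempdb')"
            ).fetchall()
            return [row.name for row in rows]
        finally:
            conn.close()

    def migrate(database, sql):
        """Apply one migration to one customer database."""
        conn = connect(database)
        try:
            conn.execute(sql)
            conn.commit()
        finally:
            conn.close()

    if __name__ == "__main__":
        for db in tenant_databases():
            migrate(db, MIGRATION)   # a slow "whale" only delays itself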
I have also seen this used in other formats quite successfully - Fandom/Wikia (used to?) use a MySQL database for each sub-site.
> I have also seen this used in other formats quite successfully - Fandom/Wikia (used to?) use a MySQL database for each sub-site.
Stack Overflow used it as well, with a database per site (DBA.StackExchange.com, ServerFault, SuperUser, Ask Ubuntu, etc.)
I have a bunch of clients using it. Another drawback with this design is that high availability and disaster recovery can become more complex if you have to account for an ever-growing number of databases.
Location: United States, New York Metropolitan area
Remote: Sure
Willing to relocate: Within continental US
Technologies: Web, Networking protocols especially DNS, Virtualization and Clustering, Databases of all types from CSV and SQLite to CouchDB and MS-SQL, Revision Control/CI/CD (git and friends), distributed filesystems, many Languages
Résumé/CV: https://kashpureff.org/eugene/resume.html
Email: In Resume
Have been working across technical disciplines since the 1990s, always looking for Interesting Problems to solve. Please let me know if you have any questions about my background - I can guarantee an interesting story!
Computing Power has increased tremendously, along with the resolution of digital imaging compared to analog film plates. Sky Survey projects like the Vera C. Rubin Observatory have come online in recent years, generating Terabytes of survey data each night that can be rapidly examined for differences from previous captures. In the past, each exposure had to be hand-aligned on a Light table and “flipped” between to spot differences.
Technology is constantly evolving. Most recently familiar with the PowerShell/C#/.NET ecosystem, but not exactly looking to reprise that experience. Worked primarily with Interpreted languages - Perl, PHP since version 3.1, Brainfuck, Python, Ruby, Lua, various flavors of Shell, and many more. Not scared of Compiled Languages or Assembly.
Employment History: Upon Request. Most recently at Microsoft - departed for Family Reasons.
I am seeking a Position which is Technically Interesting. I would like to be part of a Team working towards a common goal that improves Society, rather than a collection of Individuals connected only by their Salaries at a megacorporation. I am not searching for any specific Job Title - you may be looking for a Software Engineer, Business Analyst, Technical Manager, Security Researcher, Systems Architect, or maybe just a part-time Consultant. In my career since 1995 I have chased bizarre memory-alignment performance regressions in C code for a Physics library, built distributed database replication systems with leader-election consensus, chased timing issues within Multiplayer game protocols, implemented Financial audit controls to identify untrustworthy employee behaviour, built Monitoring & Active DevOps response systems for “Five-ish Nines” Uptimes of customer Environments, designed and installed redundant Low-Voltage and High-Voltage Electrical systems on Ships (DP2 standards) and in Data Centers, performed physical security penetration testing for Restricted Sites, and participated in a takeover of the Global Domain Name System’s root servers.
If you have any Questions, please send me an Email!
The NEXRAD weather radar system has multiple modes of operation (Volume Coverage Patterns), configurable for each antenna site and each optimized for different weather conditions. The light-blue returns represent humidity in the air (not quite rain or fog) and are usually tuned out below the “noise floor”.
Throughout the late 90s, “Mail.com” provided white-label SMTP services for a lot of businesses, and was one of the early major “free email” providers. Each Free user had a storage limit of something like 10MB, which was plenty in an era before HTML email and attachments were commonplace. There were racks upon racks of SCSI disks from various vendors for the backend - but the front end was all standard Sendmail, running on Solaris servers.
I worked at a competing white-label email provider in the 90s, and even then it seemed obvious that running SMTP on a Sun Enterprise was a mistake. You're not gaining anything from its multiuser, single-system scalability. I guess it stands as an early example of the pets/cattle debate. My company was firmly on the cattle side.
I was just the Teenage intern responsible for doing the PDU Cabling every time a new rack was added, since nobody on the Network or Software Engineering teams could fit into the crawl spaces without disassembling the entire raised-floor.
I do know that scale-out and scale-up were used for different parts of the stack. The web services were all handled by standard x86 machines running Linux - and were all netbooted in some early orchestration magic, until the day the netboot server died. I think the rationale for the large Sun systems was the amount of Memory that they could hold - so the user name and spammer databases could be held in-memory on each front end, allowing for a quick ACCEPT or DENY on each incoming message - before saving it out to a mailbox via NFS.
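Purely as an illustration of that pattern (none of this is the actual Mail.com code, and all the names are made up), the front-end decision boils down to constant-time, in-memory lookups before anything touches the slow shared storage:

    # Toy illustration of the in-memory accept/deny pattern described above.
    # A real front end would load these sets from a replicated database at startup.
    from pathlib import Path

    VALID_USERS = {"alice@example.com", "bob@example.com"}   # hypothetical user list
    KNOWN_SPAMMERS = {"spam@bad.example"}                    # hypothetical blocklist
    MAIL_ROOT = Path("/mnt/nfs/mailboxes")                   # the slow NFS-backed store

    def decide(sender: str, recipient: str) -> str:
        """Cheap in-RAM checks first; only accepted mail ever hits NFS."""
        if sender in KNOWN_SPAMMERS:
            return "DENY"
        if recipient not in VALID_USERS:
            return "DENY"
        return "ACCEPT"

    def deliver(sender: str, recipient: str, body: str) -> bool:
        if decide(sender, recipient) != "ACCEPT":
            return False
        mailbox = MAIL_ROOT / recipient / "new.eml"   # simplified single-file mailbox
        mailbox.parent.mkdir(parents=True, exist_ok=True)
        mailbox.write_text(body)                      # the expensive NFS write
        return True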
Makes sense, there are a lot of reasons why having some "big iron" might have been practical in that era. x86 was not a full contender for many workloads until amd64, and a lot of the shared-nothing software approaches were not really there until later.