I used to be a big proponent of jQuery especially in the heyday of shims and browser hacks, but in the last few years I find it often gets in the way of what I'm trying to do. Now that the native browser APIs are maturing and relatively consistent, having direct access to the objects and their properties is simply more predictable than having to second-guess a layer of abstraction that does the same job but differently.
I have to remember, what does jQuery's .hide do again? It doesn't just set display to none or visibility hidden. Give it a duration, and it will use the style attribute to manipulate the display, width, height, opacity, padding, margin, etc. Then it leaves some style properties behind. Ugh. Do I really want to do all that stuff? Do I really want to build my UI framework around jQuery so I can avoid annoying transition artefacts?
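To make that concrete, here's a minimal sketch of the difference. The element is a plain stub object (so this runs outside a browser), the "is-hidden" class name is made up, and the animatedHide function only roughly imitates what an animated jQuery hide does; it's not jQuery's actual implementation:

```javascript
// A stub standing in for a DOM element, so this runs outside a browser.
const el = {
  style: {},              // inline styles, like element.style
  classList: new Set(),   // crude stand-in for element.classList
};

// Roughly what an animated hide does under the hood: it drives a pile of
// inline style properties during the transition, and some can linger after.
function animatedHide(element) {
  Object.assign(element.style, {
    display: "none",
    overflow: "hidden",   // the kind of property that gets left behind
    width: "0px",
    height: "0px",
    opacity: "0",
  });
}

// The native-flavoured alternative: toggle a class and let the stylesheet
// own the visuals (e.g. .is-hidden { display: none; }). No inline styles.
function hide(element) {
  element.classList.add("is-hidden");
}

animatedHide(el);
console.log(Object.keys(el.style).length); // several inline properties written

const el2 = { style: {}, classList: new Set() };
hide(el2);
console.log(Object.keys(el2.style).length); // 0: inline styles untouched
```

The point being that with the class toggle, everything the element looks like is declared in one place (the stylesheet), so there's nothing to second-guess later.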
Not hating on jQuery. Just my own experience. I used to feel liberated when using it because browser APIs were so terrible but now I feel encumbered if it's included as a dependency on a project I'm forced to work with, because it does so much black box magic. If you avoid certain things and stick to what it does well then it's not bad, but then there's no point using it because what it does well is no longer a pain point in browser APIs.
For me the main thing it excelled at was DOM selection and manipulation, and native APIs do that just as well now. The secondary benefit was animation, where native CSS now goes a long way without excess verbosity. If you really need a more feature-rich animation library, I've not come across any better than Greensock for getting the job done, even if it does have a paid tier. I'm sure there are dozens of other libraries equally suited to animation; the point is that animation isn't jQuery's strength either.
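For what it's worth, the kind of fade people used to reach for jQuery's animate() to get is a few declarative lines of CSS now. A minimal stylesheet sketch (the class names are made up):

```css
/* Fade in and out by toggling a hypothetical .is-hidden class;
   the browser interpolates the opacity, no inline styles involved. */
.fader {
  opacity: 1;
  transition: opacity 300ms ease;
}
.fader.is-hidden {
  opacity: 0;
}
```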
Anyway, circling back to what you were saying: efficiency-wise, jQuery isn't the best tool for the job for complex animations, and for DOM manipulation native can be just as easy with some sugar. jQuery can do a lot of unexpected and hidden things most people aren't aware are happening; so many bugs and so much time wasted realising jQuery was messing with style attributes that break an otherwise well designed layout.
I would rather write a few characters more code or a couple of lines more to have direct control over what's actually happening. That seems more efficient to me.
Edit: OK, on reflection I guess I am against jQuery, but I don't hate it. Using it these days just feels like figuring out how to get it to do what I want done to the underlying APIs, when I could more easily and predictably just manipulate those APIs directly.
Usually I'm not bothered by how a font looks but this one was unpleasant to me as well and I'm not one to usually notice. It seems they're using a font face designed for titles in the body text.
"“The Bodoni typestyle is not an all-purpose workhorse. It is, rather, a high-strung thoroughbred,” says Allan Haley, Monotype’s director of words and letters. He is absolutely correct! Most currently available Bodoni designs are intended for, and best used for, display sizes. Their extreme thin strokes won’t reproduce well at smaller sizes, which can degrade their appearance and lessen readability. Using a Bodoni outside of that version’s “sweet spot” size range can have unintended results: If you’re using one that is intended for display, but setting it at small point sizes, the thins might not hold up, spacing will most likely appear too tight, and overall readability will begin to be compromised." - https://creativepro.com/typetalk-good-looking-bodoni-at-any-...
A friend once heard Hermann Zapf say that he had not intended Melior--or perhaps Optima--as a text font but a display font. Some of the audience, which consisted largely of people working in book design and production, were shocked.
This is not going against what you said in principle; I feel what you are saying. But I need to add that I was admin of a few topic-oriented message boards in the 90s and early 2000s, and yeah, as you said, it was pretty good simply to be part of a group that cared about the same random stuff as me. I think a big part of that is because humanity had literally never been able to connect so quickly on shared interests across such geographically diverse regions before. But it wasn't all roses there either.
Personality cults were a regular theme. Honestly just one individual with no other goals in life could wreak havoc by constantly weaving between the rules, launching sock puppets to do some virtual Munchausen-by-proxy performance, painting admin as the bad guys, staging crises that didn't really exist to get more followers (in the social sense, there was not really a "follow" option in the platforms at the time). These topic-based forums were often in direct competition, and on more than one occasion it was revealed (usually by infiltrating via long-term social engineering so you could get to see the IP addresses of the members) that these users were from competitors trying to stir trouble and siphon off members.
Diversity and cliquishness were issues. Generally a community would kick off around some exciting new theme, or just a general shared interest, and grow organically from there. This was great, but the longer the same group hung together, the more insular the atmosphere and inside references became. It's just what people do in physical groups when they hang together a lot: they grow bonds with each other, their shared experiences strengthen those bonds, and newcomers can see it will take a lot of effort and patience to reach the same level of acceptance. The older and more insular a community becomes, the fewer people are attracted to it. Then eventually the older members see there's nothing new to learn and drift off. So the lifecycle of topic-based message boards followed a standard inception/growth/stagnation/diaspora pattern.
Generalised social media puts everyone on an even platform - albeit a pretty shitty one - everyone sucks equally by default. You're correct in that the centralisation has a ton of other side effects and I don't disagree that many of these aren't what people want (if they're aware of it). Just that as I said it wasn't all roses and we can't just "go back". There were tons of reasons why the topic-oriented message boards faded away and it wasn't just laziness or convenience. It is human nature to desire connection and a sense of place, balanced with a need for novelty and invigoration of ideas. Generalised social media provides that routinely and formulaically, they basically hacked our brains.
Also on practicality of your suggestion, we can't force people to go back. You can't put a gun to people's heads and force them to only use single issue forums. I get the nostalgia because I was a part of it and it was great for a time, but it did also have a ton of downsides.
I think we need to move forward not go back. Federated social networks are one attempt at this. It's a lot to take on board as we have to learn new things like managing our identity / signatures and learning differences between providers, but efforts are underway to try and shift us away from the big old attention silos people have been trained to use these days.
I never ran a popular board. I ran some small ones, mostly folks I knew in real life. I've also run several in-person clubs over the years, which obviously don't scale to the same degree. But I do have some inkling of the issues you're talking about.
The newness of the whole thing is a great point. I keep hoping that Internet culture as a whole will invent a new sense of manners. (At the risk of being accused of being an Eternal Septemberist, which was actually before my time.) 'Member when people talked about being a "good 'netizen"? We had trolls, but they knew what they were doing was against good manners (indeed, that's why they did it).
Somewhere along the line, people stopped getting on-ramped onto the Internet. They got dumped on instead and the only role models they had were other folks who couldn't see the humanity behind the handle.
A lot of the issues you talked about still exist in general-purpose social media. Indeed, the platform reinforces it, as it gets to know your political proclivities better and pushes you into their engagement bubbles.
I think the decentralization is a bulkheading against those issues. When they happen--and they will happen--the limited scope of the topic board limits the damage to that subculture. It doesn't impact the Whole Damn Nation. Can you imagine someone like Donald Trump winning the presidency without a one-stop-shop of advertising and propaganda dissemination that Facebook provides? You don't even have to spend that much money, you can get the people to organically self-sustain it with the right meme seeding.
We had competing boards, too. I was active on two different game development boards. There were more that I just didn't bother with. If one started to feel like shit, I could dump over to another one. There was some continuity, but it wasn't absolute.
IDK. I know I'm probably rose-colored-glasses on the issue. And you're right, there's no putting the cat back in the bag. Maybe the bigger problem is that most people really are shit and smartphones gave them access to the internet. "Garbage in, garbage out". But it seems like they'd all be fighting it out on the ESPN boards, away from my eyes, if it weren't for general-purpose social media.
Many of the projects I worked on would have a near-identical replication of an environment, from the network stack to the application and databases. Flipping from staging to production was sometimes as simple as a DNS update. It's always an eye-opener to see businesses at this scale operating without a full replication of production in staging. It's harrowing to test a destructive change on dummy data knowing there are a million ways a live deployment could go wrong, and the impact only scales as you grow, so that kind of redundancy seems even more important.
It could be argued at the scale of a company like Atlassian that this level of redundancy is prohibitively expensive, that's a lot of databases and files to have sitting around doing nothing 99.99% of the time, and it's hard to argue for prevention of something that's never happened and would be a costly thing to tool up for. But you can definitely factor scaling your redundant capacity into your model, both pricing-wise and engineering-wise. It's not like Atlassian products are cheap to begin with, I'm sure they can sustain some velocity / bottom line hit for the sake of something as basic as fully replicated staging environments. I definitely don't think this is on the engineers, it's a strategic oversight and shows where ultimate priorities lie within the company.
Putting your trust in a cloud service to take care of things you'd otherwise have to worry about yourself is a major decision. Safety is one of the top priorities of basically every user, and seeing this lack of process and glib approach to staging is a major red flag.
Anyway that aside I do appreciate their detailed write up and it does feel like a bluntly honest and truthful disclosure. That goes a long way to restoring trust, but it does also expose some of how the sausage is made and it's clear some of the ingredients are questionable. It does bear the hallmarks of a small successful software startup hitting the big time and scaling with acquisitions faster than supporting processes can safely scale; they have a team of engineers and it's up to them where to engage them and it seems being able to do proper dry runs of destructive changes wasn't seen as more valuable than getting more services on the products page.
Hopefully they'll act on the recommendations of the report and implement the improvements they said they would and not just refocus their efforts elsewhere once the spotlight moves on. I'd like to see regular updates on this as a long-term Atlassian user as it would factor greatly into me recommending Atlassian products over other stacks in the future. They could easily set up a public Jira / Trello board so we can keep track of progress on these promises.
Obviously this is not unique, these mistakes have happened before, so it's not just the kind of stuff that seems obvious in hindsight. I am sure there were engineers highlighting these issues internally but scaling redundancy is never as sexy as onboarding a new product and adding its customers (and revenue) to your quarterly reports. Hopefully the reputation hit is a stark reminder to the c-suite that yes, they are running a technology company, and that means that technology and engineering should be just as important as growth and penetration.
Anyway, good on them for being open. Well done to the engineers who worked to untangle the mess, good on management for allowing this level of transparency and taking ownership, things could have been a lot worse by the sounds of things.
Telemetry is the catch-all term for data captured about a device and how it's used.
Telemetry servers combine this information to build profiles on users. Network models of relationships between devices and user behaviours are used in recommendation engines including autocomplete results.
If you are in the same room as a cohort of people, and one or more of them had been searching for or messaging the X & Y Smiths, then by geolocation, since you all attend the same church, you're associated as potentially interested, especially considering you already had their details saved.
There are also fuzzy logic factors, like maybe those three letters weren't in their name but were in their phone number (each number corresponds to several alphabet characters), or might have been in a message you've long since deleted but is still in your autocomplete index which combined with the geolocation weighting could have caused it to pop up as an option.
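Just to illustrate the digit-to-letter correspondence (not any vendor's actual matching logic), here's a toy sketch using the standard phone keypad layout, where a name maps to the digit string you'd dial for it:

```javascript
// Standard phone keypad: each digit 2-9 corresponds to several letters.
const KEYPAD = {
  2: "abc", 3: "def", 4: "ghi", 5: "jkl",
  6: "mno", 7: "pqrs", 8: "tuv", 9: "wxyz",
};

// Map a word to the digit string you'd dial for it, skipping any
// character that has no keypad digit (spaces, punctuation, etc).
function wordToDigits(word) {
  return [...word.toLowerCase()]
    .map(ch => Object.keys(KEYPAD).find(d => KEYPAD[d].includes(ch)) ?? "")
    .join("");
}

console.log(wordToDigits("smith")); // "76484"
```

So a three-letter fragment of a surname and three digits of a phone number can collide in this space, which is the kind of weak signal a fuzzy matcher could conceivably weight.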
In these instances it might appear your phone is listening, but you were in the same room as some people who were probably also interested in that family, had their data in your history, and being a new connection could have boosted its relevance too. (I just saw someone else also answered that you probably only noticed this event because of the conversation, or the Baader–Meinhof phenomenon, which is also plausible)
Not saying definitively that your phone was not covertly listening but our devices are capturing and correlating a massive amount of dimensions related to our behaviours at all times, so it's not beyond reason that enough of these factors lined up to cause the autocomplete engine to suggest it as a reasonable option.
I could definitely believe it was deep telemetry as the intended message recipient did have proximity to the unexpected 'match' earlier in the day. I am 99% sure they hadn't swapped contacts though, so it wouldn't be as easy as a matched contact connection.
Interesting idea about the mapping of numbers to letters, oh how I miss 800-call-sal advertisements.
I was reaching on the mapping of numbers to letters, to be honest. If I were designing a system to track and correlate everything I'd take it into account since it's still in use, but yeah, that was a definite stretch in this case.
But I am pretty sure this was just a proximity match. When you're dealing with the quantity of telemetry the big players deal with, billions of people in real time all day long, how do you figure out what's relevant and what isn't?
Physical proximity is important. You don't have to swap contacts with someone for your telemetry to connect you with them. I mean you were right next to someone who was right next to them, out of billions of people you're relevant, so no one needs to share contact details. The manufacturer of the phone knows your geolocation. Your telecom company knows your geolocation. If you have bluetooth or wifi switched on and you're already fingerprinted, then every chain store knows your geolocation. If you use a credit card or eftpos card anywhere, the products you purchase are combined into your profile, etc etc.
That, and you already had them in your contact list (even though you were surprised they were there, you're not saying they weren't; I have people in my contact list from 15 years ago I only spoke with one time...). They already know you've bumped into this contact before, and they boost the recommendation because the shared contact and the geospatial relevance occurred within a short period of time.
Like I'm pretty cynical and suspicious at the best of times, but once I started to realise the above, all my "oh shit they're listening" moments kind of dissolved because I could trace all of them back to being in the same room as someone who had met a person, or had been actively searching a related topic in the past few days.
Yeah, it's still spooky; it's the reason I run a pi-hole and got myself off most social media.
Also I noticed this thread got flagged. Not sure exactly why but I think it's because this same subject has come up a few times. I do think people need to better understand how network analysis can reveal spooky shit about our behaviours, like our devices don't need to be literally listening to our words in order for corporations and governments to know exactly who we are or what we're about. There's tons of different signals we all send out each day that fingerprint exactly who we are, who we're related to, and what we care about, they don't need realtime voice processing.
Yea, I understood the number mapping was a stretch... just a fun memory.
Thanks for laying out a very plausible case for how this match could occur without actually listening to the ambient conversation.
I also run a pi-hole at home and almost never visit social media. Frankly I am surprised it took this long for people to understand how giving up their privacy had a cost, I was hoping the backlash against Facebook et al was going to start a decade ago.
Thanks! The game here is written in Inform7, and it was so difficult to make progress that it took me three years on and off to finish the game. So if you do it that way, beware :)
If permanency is a priority then letting external scripts be responsible for presenting content is not a good idea, especially if the agreement doesn't make any promises about whether content will be permanent, and doubly so if the agreement / terms of service explicitly say they can change the behaviour of their services at any time.
What this probably calls for and maybe something is out there is some service that can embed, archive, and track changes to a tweet or social media post. You'd embed the same way, but the archive will fetch and cache the content. It could then serve up the original version, as well as a timeline of changes.
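As a sketch of what the core of such a hypothetical service could look like, snapshotting content on each embed, deduplicating unchanged fetches, and serving both the original and a timeline of changes (all names here are made up, and a real version would fetch posts over HTTP):

```javascript
// Toy archive: postId -> ordered list of { at, content } snapshots.
class PostArchive {
  constructor() {
    this.snapshots = new Map();
  }

  // Record a snapshot, skipping it if the content hasn't changed
  // since the last capture (deduplication).
  capture(postId, content, at = Date.now()) {
    const history = this.snapshots.get(postId) ?? [];
    if (history.length === 0 || history[history.length - 1].content !== content) {
      history.push({ at, content });
    }
    this.snapshots.set(postId, history);
  }

  // The first version ever captured.
  original(postId) {
    return this.snapshots.get(postId)?.[0]?.content;
  }

  // Full timeline of distinct versions.
  timeline(postId) {
    return this.snapshots.get(postId) ?? [];
  }
}

const archive = new PostArchive();
archive.capture("tweet-1", "first version", 1);
archive.capture("tweet-1", "first version", 2); // unchanged: deduplicated
archive.capture("tweet-1", "edited version", 3);
console.log(archive.original("tweet-1"));        // "first version"
console.log(archive.timeline("tweet-1").length); // 2
```

A real service would also need a deletion path, which is exactly where the right-to-be-forgotten concerns below come in.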
The right to be forgotten has merit though, and I can see twitter's logic there and probably they're under pressure via GDPR or something. So any archival or cache service would need to take that into account. Various countries and districts have varying laws on what is and isn't official public record too, so it seems like managing that could be the function of a dedicated archival service.
This sounds like something I would write if a hypothetical gun was pointed at my head in a company where the most prominent customer complaint was that time spent in QA and testing was too expensive.
I have zero trust in any company that deploys directly from a developer's laptop to production, not least because of how much you then have to trust that developer. There has to be some process, right?
> company that deploys directly from a developer's laptop to production
Luckily, there's no sign of them doing that here. There's no mention of how their CI/CD works, probably because it's out of scope for an already long article, but clearly some pipeline is handling the deploy.
"We only have two environments: our laptops, and production. Once we merge into the main branch, it will be immediately deployed to production."
Maybe my reading skills have completely vanished but to me, this exactly says they deploy directly from their developers' laptops to production. Those are literally the words used. The rest of the article goes on to defend not having a pre production environment.
They literally detail how they deploy from their laptops to production with no other environments and make arguments for why that's a good thing.
It says they "merge into the main branch" and it will be immediately deployed to production presumably via CI/CD system that detects code changes and does the necessary dirty dance.
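For what it's worth, "merge to main deploys to production" usually means something like this hypothetical GitHub Actions workflow, not a laptop pushing bits (the deploy script name is made up):

```yaml
# Hypothetical CI/CD sketch: merging to main triggers the deploy from CI.
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy-production.sh   # assumed deploy entry point
```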
Tangential but what is the most successful and most complex example of a fully FP-based application where I can view the source-code today? I feel like I still don't quite grasp the essence of the conflict between FP and whatever it seems to be opposing. I also feel like I use what a lot of FP people claim as FP in my day-to-day.
I think if I can see some code beyond the old map/reduce toy examples it might click. Not wanting to be confrontational, and I know that this forum isn't here specifically to educate me personally, I just genuinely have had problems understanding what FP is "supposed" to be that any good programmer doesn't already do when possible, regardless of language or whatever.
I'm sure it will click and I'll look dumb, but yeah. Isn't loose coupling and strong cohesion kind of covering this already? Is it just down to using map/reduce/filter to avoid mutating some variable and instead get a new set of results? Is avoiding mutation the charm?
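Something like this is the contrast I mean (a toy example, not claiming this is all FP is):

```javascript
const orders = [
  { id: 1, total: 40 },
  { id: 2, total: 120 },
  { id: 3, total: 75 },
];

// Imperative habit: walk the array and mutate an accumulator (and it's
// tempting to mutate the input along the way too).
function bigOrderTotalImperative(list) {
  let sum = 0;
  for (const order of list) {
    if (order.total > 50) sum += order.total;
  }
  return sum;
}

// FP habit: express the result as a pipeline of pure transformations.
// The input is never modified, so callers can share it freely.
const bigOrderTotal = list =>
  list.filter(o => o.total > 50)
      .reduce((sum, o) => sum + o.total, 0);

console.log(bigOrderTotalImperative(orders)); // 195
console.log(bigOrderTotal(orders));           // 195
```

Both compute the same thing; the FP version just guarantees no mutation, which is supposedly the property that pays off as programs grow.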
I genuinely feel I'm missing out on something important here because I have older views on programming, and I've struggled to get a good explanation. Do I have to go back to school again? I'm not against that, but yeah. I'm sorry if this sounds ignorant, but I kind of am.
I used to be in your shoes. What helped me was learning Haskell and watching all YouTube videos by the Clojure creator (what’s his name again?) It was hard for me to get it (old dog learning new tricks). But I persisted and suddenly the light bulb went off and it all became super simple and easy. It was a hard journey but worth it. I am a much better programmer today, in any language, because of it.
Possibly a lot of the 40k went into branding and other fluff, but something in the core engine might have intrinsic value even if its extrinsic value wasn't properly realised or couldn't find a place in the market as a standalone product.
It would be nice if there were some kind of universal project "retirement home" where these sorts of things could go to pasture, caretakers could prod at the idea and pull it apart, find the intrinsic value if any, and integrate the guts of it into some broader universal API. Such a platform of trinkets that solve tiny everyday problems that individually have little value might actually be useful in any sort of recommendation engine. API calls could be metered so the original creators of each piece get some kind of return however small.
Give me 40k and some failed IPs to play with and I'll deliver a prototype in a few months...
const $ = document.querySelector.bind(document)
const $$ = document.querySelectorAll.bind(document)

(binding querySelector for the single-element case, since querySelectorAll returns a NodeList that won't have element properties)
Then later:
$('#element').whatever