My grandfather purchased a niche customer management DOS application in 1986 or so and used that software up until the day he died in 2012.
When he was finally forced to upgrade to a computer without a parallel port, he was stuck for a while because the software would only print to a printer connected via parallel port. I couldn’t find a card for this at the time, for whatever reason, but I eventually managed to trick the software into printing to a modern printer. I had never seen him happier, since it saved him the $3,000 software upgrade fee.
Without having thought much about this, surely Datadog only wants to store your data and have you pay for the storage/indexing/querying? I guess your worry is something like Datadog making themselves the only possible backend? I don’t feel like that’s a very big risk – I think trying it would just lead to a fork of Vector. Perhaps a more realistic risk is that Vector would implicitly assume Datadog’s constraints, e.g. (making these up without knowing much about Datadog) field types, required information, or the expected number of unique fields across all messages.
Yeah, the trick is that if they can lock you into a stack that sits everywhere in your apps, it’s very expensive to switch vendors, letting them extract high rents. This is what happened with the Datadog agents.
In that context, OTEL is an existential threat, because it makes them a commodity. Then it becomes relatively clear why they wouldn’t put OTEL support in the Vector roadmap.
I guess I’m surprised that your claim is basically that datadog’s advantage is in ingestion. I would have assumed they would be focusing on trying to make a product so good that people wouldn’t want to switch from it. Vector supporting multiple backends would be good for datadog if it can get more people in the door, so long as their product is compelling enough for people to stay.
I don’t know what exactly you mean about otel but elsewhere in this comments section someone linked to upscale, which uses vector to collect otel logs. Is that a counterexample?
My experience with using Datadog at (some) scale was that they focused on making it really, really easy to integrate their agent with your apps, and then once they had a large base of users with high switching costs they started rapidly raising prices.
In other words: My claim isn’t that they were better at ingestion but at onboarding and at creating switching costs.
Since that was how their leadership acted last time I used their code, I expect the same leadership to act the same way again with this other piece of code they own.
Given my experience with Datadog’s pricing lock-in-and-switch, yeah, 100% I’d rather run agents that allow me to pick the collection backend than another tool from Datadog.
For the first time in ~8 years I was able to watch old flash content I had lying around and all I needed to do was link to the ruffle JS library. Worked flawlessly.
Thanks for letting those videos live again, Ruffle team!
I was a drive-by contributor of a few PRs to Ruffle out of love for old flash videos and one of my favourite users is The Internet Archive. It’s wonderful that this stuff is preserved in the active sense and that you no longer need to view YouTube/video renders of flash content to see all the old animations.
Obviously the vector graphics are both lightweight and scale beautifully, but somehow the loading screens and various bits of interactivity in primarily video content are almost as important, to me at least.
This has saved parts of my childhood from the brink of extinction.
An issue is that only the old stuff works, because only AS2 has been implemented. The newer (imo) creative stuff still doesn't work due to spotty AS3 support.
yeah, I was excited to revive Homestuck's interactive parts, but they were all AS3. I hacked around with Ruffle to see if I could add enough to get them working, but at the time AS3 support was an empty skeleton.
have they gotten some AS3 working? if so I might revisit.
I made this because I got sick of seeing passwords copied around in emails and on slack. Normally I would say "we should use GPG," but that's not always super user friendly for non-technical folks.
It encrypts the body of the "secret" using a default passphrase, and you can optionally set your own passphrase to encrypt and "lock" the secret.
I opted not to add any support for creating users at this point, since my intention was to have a company run this internally on a private WAN, although I am sure my position on that will change.
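To make the "default passphrase, optionally your own" idea concrete, here's a minimal sketch of one way such a lock could work, using passphrase-derived symmetric encryption in Python with the `cryptography` package. It's only an illustration of the general approach, not the tool's actual implementation; the function names and default passphrase are made up.

```python
# Illustrative only: passphrase-based symmetric encryption of a secret body.
# Requires `pip install cryptography`. Names and defaults here are hypothetical.
import os
import base64
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

DEFAULT_PASSPHRASE = "change-me"  # stand-in for the tool's default passphrase

def _derive_key(passphrase: str, salt: bytes) -> bytes:
    # Derive a 32-byte key from the passphrase, then encode it for Fernet.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

def lock_secret(body: str, passphrase: str = DEFAULT_PASSPHRASE) -> bytes:
    # Prepend a random salt so the secret can be decrypted later.
    salt = os.urandom(16)
    token = Fernet(_derive_key(passphrase, salt)).encrypt(body.encode())
    return salt + token

def unlock_secret(blob: bytes, passphrase: str = DEFAULT_PASSPHRASE) -> str:
    salt, token = blob[:16], blob[16:]
    return Fernet(_derive_key(passphrase, salt)).decrypt(token).decode()
```

With something like this, sharing a secret is just `lock_secret("hunter2", passphrase="team-chosen")`, and the recipient only needs the passphrase, not a key pair, which is the usability gap GPG leaves for non-technical folks.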
At a previous company we managed our own email infrastructure before finally switching over to a dedicated service. There are a few problems with sending bulk email that is customized per user when you run your own SMTP service.
1. IP address reputation -
Keeping your IP addresses reputable is not a simple task. It requires balancing your emails for popular destination domains (gmail.com, aol.com, yahoo.com, etc.) across multiple external IP addresses. It requires you to deal with many different conflict-resolution departments, who don't care about email, when a dispute comes up. It's practically a requirement to use a service like ReturnPath to maintain your reputation.
2. Throttling -
When doing it yourself, you need to handle throttling. This is problematic on "big" days, especially when your marketing department wants to send many millions of emails for a big product push or promotion, or on days like Black Friday/Cyber Monday.
3. Hiring -
A lot of people think sending email is easy. When you get up to the multiple-millions-per-day mark, things start to fall apart. Do you have people on staff who really know sendmail/postfix/qmail inside and out?
4. Monitoring -
sendmail/postfix/qmail are oftentimes hard to monitor. You have to put together all of your stats. You have to put together all of your alerts. If you aren't really experienced with bulk email, you won't know what to look for, and that can impact your reputation. Also consider your logging infrastructure: sendmail/postfix/qmail are noisy.
5. Cost -
All of the points above play into the cost aspect. Is it cheaper to run it yourself and pay for all of the services, salaries, etc., or is it actually cheaper to just use sendgrid/mailgun/etc.? IP address reputation services are not cheap. Infrastructure cost is also something to consider. AWS IPs all have pretty terrible reputations, so running this in AWS (and maybe other cloud providers) is a non-starter since no one will accept your email.
If you've got the expertise and you are sending a massive amount of emails then it might be worth it to run your own infrastructure, but at the end of the day, a single developer consuming an API is often easier and less problematic.
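For contrast, the hosted-provider path usually reduces to something like the sketch below: a short loop against an HTTP API, with the provider handling IP reputation, feedback loops, and most of the monitoring. The endpoint, token, and payload shape here are placeholders, not any specific vendor's real API.

```python
# Rough sketch of the "one developer consuming an API" path.
# API_URL, API_TOKEN, and the JSON fields are hypothetical placeholders.
import time
import requests

API_URL = "https://api.example-esp.com/v1/send"   # hypothetical provider endpoint
API_TOKEN = "YOUR_API_TOKEN"

def send_bulk(recipients, subject, render_body, per_second=50):
    # Even with a provider, pace the calls instead of blasting everything at once.
    for rcpt in recipients:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            json={
                "to": rcpt["email"],
                "subject": subject,
                "body": render_body(rcpt),   # per-user customized content
            },
            timeout=10,
        )
        resp.raise_for_status()
        time.sleep(1 / per_second)
```

Compare that to keeping a postfix cluster, warmed IP pools, and log pipelines healthy yourself, and the build-vs-buy math usually answers itself below a certain volume.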
The other important one, at least in the US, is unsubscribe/CAN-SPAM compliance [0]. You've got 10 days to comply or you'll be at risk of up to a $41k fine per email!
Miss you, Grandpa!