Hacker News | bigmadshoe's comments

Capital ≠ income

I’m really no expert on sharding, but if you’re using increasing ints, why can’t you just shard on (id % n) or something?

Because then you run into an issue when 'n' changes. Plus, where do you generate the increasing IDs? That requires a single fault-tolerant ticker (some systems do that, btw).

Once you encode the shard number into the ID:

- you instantly* know which shard to query

- each shard has its own ticker

* programmatically, and maybe visually as well, depending on the implementation

I had IDs that encoded: entity type (IIRC 4 bits?), timestamp, shard, and sequence per shard. We even had an admin page where you could paste an ID and it would decode it.

id % n is fine for a cache, because you can just throw the whole thing away and repopulate, or when 'n' never changes, but it usually does.
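
For illustration, a minimal Python sketch of such a shard-encoded ID. The bit widths and field order here are assumptions for the example, not the exact layout described above:

    import time

    # Assumed layout (illustrative only): 4-bit entity type,
    # 41-bit ms timestamp, 8-bit shard, 11-bit per-shard sequence.
    TYPE_BITS, TS_BITS, SHARD_BITS, SEQ_BITS = 4, 41, 8, 11

    def encode_id(entity_type, shard, seq, ts_ms=None):
        """Pack the fields into a single integer ID."""
        if ts_ms is None:
            ts_ms = int(time.time() * 1000)
        return ((entity_type << (TS_BITS + SHARD_BITS + SEQ_BITS))
                | (ts_ms << (SHARD_BITS + SEQ_BITS))
                | (shard << SEQ_BITS)
                | seq)

    def decode_id(id_):
        """Unpack an ID -- what the admin 'paste an ID' page would do."""
        return {
            "entity_type": id_ >> (TS_BITS + SHARD_BITS + SEQ_BITS),
            "ts_ms": (id_ >> (SHARD_BITS + SEQ_BITS)) & ((1 << TS_BITS) - 1),
            "shard": (id_ >> SEQ_BITS) & ((1 << SHARD_BITS) - 1),
            "seq": id_ & ((1 << SEQ_BITS) - 1),
        }

    # Routing reads the shard straight out of the ID, so nothing moves
    # when the shard count grows (unlike id % n):
    assert decode_id(encode_id(entity_type=3, shard=42, seq=7))["shard"] == 42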


^ This

Square is primarily a payment platform, so you have probably used your credit or debit card with them thousands of times already.

Yup. Credit. Or Debit. They can play with crypto behind the scenes all they want. That doesn't make it worth anyone's time.

Presumably they were paid for finding the bug and, in accepting, relinquished their right to blog about it.


No, you relinquish the right when you agree to their TOS, regardless of whether they pay you.


TOS != law

They will stop letting you use the service. That's the recourse for breaking the TOS.


I don’t want to pay for a lawyer to argue that for me. ‘!= law’ does not equate to ‘won’t come with a cost’.

I say this as someone threatened by a billion dollar company for this very thing.


Up until Van Buren v. United States in 2021, ToS violations were sometimes prosecuted as unauthorized access under the CFAA. I suspect there are other jurisdictions that still do the equivalent.


Being a sellout is weak and sad.

Yeah, the needle-in-a-haystack tests are so stupid. It seems clear that LLM performance degrades massively with context size, yet those tests claim the model performs perfectly.


As someone who regularly abuses Gemini with a 90%-full context: the model's performance does degrade for sure, but I wouldn't call it massive.

I can't show any evidence, as I don't have such tests, but it's like coding normally vs. coding after a beer or two.

For the massive effect, fill it to 95% and we're talking vodka shots. At 99%? A zombie who can code. But perhaps that's not fair when you have a 1M-token context size.



I'm the same, and I tolerate vegan D3 (from lichen). Some people are sensitive to the regular sheep-wool-derived D3.


I will give it a try, thanks for the suggestion.


How could price per token not be a concern for any “multi-billion” or “multi-trillion dollar” business? Do they just burn money to remain profitable?


You'd be surprised.


Then make the act of selling it or storing it in a database with the intent to track people illegal?


Basing it around the act of selling data seems like a much better approach to me than what OP suggested, I agree. I imagine there are edge cases to consider around how acquisitions of company assets would work, although it’s not a use case I particularly care to defend.

“Intent to track” could be an approach, but the toll bridges near me use license plate scanners for payment, so I could see it not being that clear cut. There are likely other valid use cases, like statistical surveys, congestion pricing laws, etc.


This sounds a lot like the programs encoded by neural networks.


Presumably they just render the absolute error between the LHS and RHS of the equation for every pixel in the plot.


Yep - it's just |left - right|^fuzzyLevel
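
A rough sketch of that idea in Python (the circle equation and the exponent value here are placeholders, not whatever the site actually uses):

    import numpy as np
    import matplotlib.pyplot as plt

    # Sample the plane on a pixel grid.
    x, y = np.meshgrid(np.linspace(-2, 2, 800), np.linspace(-2, 2, 800))

    # Placeholder implicit equation: x^2 + y^2 = 1 (a circle).
    lhs, rhs = x**2 + y**2, 1.0

    fuzzy_level = 0.25  # lower exponent -> wider, fuzzier band near the curve
    error = np.abs(lhs - rhs) ** fuzzy_level

    # Dark where |lhs - rhs| is small, i.e. where the equation nearly holds.
    plt.imshow(error, extent=(-2, 2, -2, 2), origin="lower", cmap="gray")
    plt.show()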

