
There is at least one reason - it was a harder problem. Agreed that which IMO problems are hard for a human IMO participant and which are hard for an LLM are different things, but it seems like they should be at least positively correlated?


IMO problems are not hard. They are merely tricky. They primarily test pattern recognition, requiring that flash of insight to find the hidden clue.

So it's no wonder that AI can solve them so well. Neural networks are great at pattern recognition.

A better test is to ask the AI to come up with good Olympiad problems. I went ahead and tried, and the results were average.


Small point but the last link here (from 2008) is a different project with the same title.


Thank you!


I interpret the article quite differently. The triangle example (which, as the author writes, is actually an example from “Lockhart’s Lament” on American math education) isn’t about whether the triangle-area formula was ever justified to students. It’s that students aren’t given the chance to really ask the question themselves - a chance to approach math, or biology, in the way that a mathematician or biologist does in reality: trying to work it out for themselves, and being invited to wonder at just how improbable textbook biological facts are, etc.

I agree with your point that some students will be more inquisitive, and will need less prompting to do the above thinking themselves. But many (most?) students are not like this, and it’s a shame that many of these students could enjoy a subject that they instead come to loathe.


I think the idea is that early Protestants like Calvinists believed in predestination - that it was already decided who was going to heaven. One would think this would make the predestined not care about their actions, but the trick is that no one was sure who the predestined were. Since it was believed that the predestined would be exemplars of the faith, people would act the way they thought the predestined would act.

I’m not a psychologist or a theologian, but it sounds related to that concept in child psychology: when a child misbehaves, it’s better to say “you did a bad thing” than “you are bad”. The latter leads to the child labeling themselves as bad, suggesting that’s the natural way they should act.


There is an interesting and seemingly not well-understood link between loss of sense of smell and decreased longevity: https://www.health.harvard.edu/diseases-and-conditions/poor-...

Wild speculation, but maybe there is some key circuitry in our brains that also doesn’t activate if we cannot smell?


I think the interpretation of "you don't get anywhere" is the crux. There are a few games I've put a few hundred hours into, and those hours were mostly fun and exciting. I gained skills in the game, some of which are transferable to other things, but most of which aren't.

If I were to put the same time into a new programming language, I'd have gained a different set of skills, again some transferable and some not. I think most people judge the second set of skills as more worthwhile than the first (maybe because it is tied to employability, and is seen as more "age appropriate").

Another aspect is that programming isn't designed to be addictive in the way some games are. Something like World of Warcraft seems designed specifically to hook into the part of the brain that rewards grind. In the moment, WoW is very compelling. But looking back on what was accomplished, I don't feel particularly positive about it. Compared to a programming project with a similar time commitment, it is probably less compelling in the moment on average, but the achievements are more satisfying in retrospect. Somehow this makes programming seem more worthwhile (though maybe I have just internalized the societal standards from above, and that's why I feel more satisfied with the programming projects).


I'd argue that fulfillment is the variable here and it seems to be what you're describing.

Spending a weekend on a programming project for fun can almost feel like the height of self-actualization to me. A pure intellectual challenge that makes me feel good and gives me a real proving ground. And I almost always look back on it and am proud of my effort even if I don't accomplish what I wanted.

Meanwhile it's too easy to spend empty, unfulfilling time in games. My time in World of Warcraft wasn't self-actualization but more narcotic entertainment. It was less challenge and more empty feel-good treadmill. It felt good at the time, but not a year later. And it was addictive and easy in a way that programming (creation, exertion) is not. Even when I got into add-on development (which produced some fulfillment), that was only a small fraction of my time spent playing.

I don't think the point needs to be that one is always good and the other is always bad. But my advice to my former self and young people would be to beware of games. Ideally we all have a creative outlet that gives us real fulfillment. Even if you get sucked into games, hopefully it can redeem itself as a creative outlet rather than leave you in a narcotic-like rut.

I think Joe's comments make perfect sense if you see it as a letter to his former self and thus young people in general. I just remember when I was a heavy gamer, I hated acknowledging that yeah, I kinda am wasting my time, and it's not fulfilling. My parents were right, but it's a hard pill to swallow when it's one of the only things you've found that excites you (and you've stopped looking).

On the other hand, I think a lot of life advice is too hard to apply and better experienced yourself. Sometimes you just have to spend a year in an MMORPG in your teens, quit it, and then think "yeah, what a waste of time that was." And you learn a good lesson about how you want to spend your time. The danger is that I have two friends in their 30s who never quit, and it shows. Never had girlfriends, never traveled, slave to games, seem miserable -- and it's hard for me to see how they are setting themselves up for life fulfillment. And I think that matters.

That's all I have to say on the topic. To avoid spamming my PoV any more in these threads, this was my last one.


The cartoon picture is that the first example will read everything into cache once, whereas the second example will read everything into cache twice.

Cache lines are typically 64 bytes, so to write a single character to main memory the following things happen (again, a cartoon picture): first, read the 64-byte area that contains the byte of interest so that it is owned by my cache (this is called an RFO, "read for ownership"); second, update the byte of interest; third (at some point), write the cache line back to main memory.

In the sequential case, we just read one 64-byte cache line at a time, update those 64 chars, then write the cache line back to main memory.

In the second example, we first update all the even-indexed characters, which still forces us to read in every cache line. Then we loop around and do the odd-indexed characters, at which point we have to read the cache lines all over again (assuming the array is big enough that the whole thing can't fit in cache at once).
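To make the cartoon concrete, here's a minimal sketch of the two access patterns I'm describing (illustrative only - not the article's actual code, and the names are made up):

    /* Illustrative only: two ways of writing every element of a large char array. */
    #include <stddef.h>

    #define N (64 * 1024 * 1024)   /* big enough that it can't all sit in cache */
    static char buf[N];

    /* Sequential: each 64-byte cache line is read for ownership, updated,
       and written back exactly once. */
    void write_sequential(void) {
        for (size_t i = 0; i < N; i++)
            buf[i] = 1;
    }

    /* Two passes (evens, then odds): by the time the odd pass reaches a line,
       it has already been evicted, so every line is pulled in twice. */
    void write_two_pass(void) {
        for (size_t i = 0; i < N; i += 2)
            buf[i] = 1;
        for (size_t i = 1; i < N; i += 2)
            buf[i] = 1;
    }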


Am I misreading the second example's algorithm? Isn't it allocating like this:

1, 2, 4 3, 8 7 6 5, 16 15 14 13 12 11 10 9, ...

And so on?

---

Also this part:

>Cache lines are typically 64 bytes

Right, but I thought when you access an index it caches quite a lot more than 64 bytes around that index. Doesn't it pull a larger chunk of the array onto multiple lines? If that's the case, then the first example is making very efficient use of the cache. And if modern CPUs are smart enough to prefetch backwards, and I understand the second example correctly, isn't the second too?


OK, turns out I was way off - it's actually completely broken. I just ran the code and printed j at each index.

    >>> main(20)
    2 4 8 16 12 4 8 16 12 4 8 16 12 4 8 16 12 4 8 16

It's not even hitting odd indexes. Over half the array will be garbage at the end. I guess that would count as out-of-order though.


Yep, I misread it also: I saw j = 2 * i (which would do evens and then odds when NUMBER is odd, or evens then evens again if NUMBER is even).

For what it really is - powers of 2 mod NUMBER - when NUMBER is large, most reads should miss the cache. So the first example has to read from main memory only every 64th index, and the second example has to read from main memory on almost every read. I think this agrees with what you are saying. It also explains why it is ~5x slower, which seemed too large under my previous (mis)reading.
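For reference, here's my rough reconstruction of that indexing (a guess that happens to reproduce the output you quoted, not necessarily the article's exact code):

    /* My reconstruction (not necessarily the original code): the index walks
       through powers of 2 mod NUMBER, jumping around instead of moving
       sequentially. NUMBER = 20 here just to show the pattern. */
    #include <stdio.h>

    int main(void) {
        const int NUMBER = 20;
        int j = 1;
        for (int i = 0; i < NUMBER; i++) {
            j = (2 * j) % NUMBER;   /* prints 2 4 8 16 12 4 8 16 12 ... */
            printf("%d ", j);       /* the index that would be written to */
        }
        printf("\n");
        return 0;
    }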


>So the first example has to read from main memory only every 64th index

Wouldn't it be 8 per cache line (if each element is a 64-bit int and each cache line is 64 bytes)? I'm also assuming it caches a larger chunk of the array across multiple lines. Is that not how it works?

But I think there's a more fundamental issue here, which is that the amount measured, 68 million bytes in a second, is what - 65MB or so? Did he just reduce the array size until it completed in a second? Because a very significant chunk of that is going to fit in L3 cache (on an i7 it's 8MB), so even with a properly random access pattern it would understate the problem, because the data is still contiguous.

Which seems kinda dumb to me, since the real-world problem you're likely to run into is when your data is stored non-contiguously because it's scattered across multiple different structs/objects, making it impossible to utilise the cache to any significant degree at all. In that (very common under OO or interpreted languages) situation I'd expect a way more dramatic slowdown.
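To illustrate what I mean by scattered data (a hypothetical sketch, nothing from the article): summing a pointer-chased linked list versus a flat array of the same values. The prefetcher can stream the array, but each list node's address isn't known until the previous load completes, so nearly every hop can miss.

    /* Hypothetical illustration: contiguous array vs pointer-chased nodes. */
    #include <stddef.h>

    struct node { long value; struct node *next; };

    /* Contiguous: the hardware prefetcher can stream cache lines ahead of us. */
    long sum_array(const long *a, size_t n) {
        long s = 0;
        for (size_t i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    /* Scattered: each node can sit on a different cache line anywhere on the
       heap, and the next address isn't known until the current load finishes. */
    long sum_list(const struct node *head) {
        long s = 0;
        for (const struct node *p = head; p != NULL; p = p->next)
            s += p->value;
        return s;
    }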


Maybe his wife, Fan Chung, who was also a close collaborator of Erdős (13 joint papers), or one of Ron's students will continue the tradition (maybe Steve Butler).

Very sad news in any case.


Doesn’t seem like there’s enough evidence that this is actually happening. The video from the reddit post has scaffolding to the left. And why surround the bricks with fences and signs?

The reddit poster dismisses this with “look at the other links people have posted here”. These are basically 3 or 4 photos or grainy videos of bricks in a city. Some photos don’t have bricks at all.

There is the supposed video of police leaving bricks. They seem to be examining bricks in the back of their car, and lining a few up on the side of their car (not on the roadside). Not very convincing.

Not saying this isn’t happening, but the evidence is literally a few photos of bricks in cities at this point. I expect most of the time we pass by piles of construction bricks without noticing them


The evidence left me with the same impression. With the amount of construction going on, there's bound to be a pile of bricks somewhere.

Also, the cops were being very careful with the bricks and were in no hurry to dump/unload them.

Unconvincing


I'm somewhat skeptical. Firstly, I don't think the language is the problem with scientific code. You can write messy code in any language. So the warning then has to be about writing software in general. In that case, I think a warning like "don't try to write software unless you have years of training" is a bit much. Many people with no training learn to write nice code. Many projects made by amateurs might have ugly code but still add something to the world (e.g. many games).

The problem here is the project is influencing decisions in healthcare.

Having worked in HPC and academia, I've seen code like this a lot. There are two archetypes I've noticed: (1) well-meaning older academics maintaining legacy code, who have often done a lot of convergence testing but still have code that isn't up to modern engineering practices, and (2) domain experts with the attitude that "programming is much easier than my area of domain expertise". These are problems that require attitude changes within academia, not better warnings on online tutorials. The second group are going to ignore the warnings anyway.

Remember, many of the people writing this academic code also teach programming courses in their departments! They view themselves as programming experts.


I'd like to point out that I've met a lot of software engineers who subscribe to (2) in reverse: "This domain of expertise is much easier than programming, ergo I am qualified to solve it."


I think it's true of many experts, that they see their own area of expertise as the most important one, and the others as relatively minor.


This model didn't just influence decisions in healthcare. It single-handedly changed the UK government's strategy over this pandemic.

From what I understand, the UK was planning on beating COVID by creating herd immunity, similarly to Sweden. Then this model came out and everyone started yelling that Boris wanted to kill your grandma.

The problem is that it's impossible to have an intelligent discussion over this. This pandemic became a partisan issue. We're not discussing whether one of the most impactful decisions made by a government this generation should be based on absolute trash code. You're either uncritical of the lockdown or "anti-science".


> creating herd immunity, similarly to Sweden.

> ...

> The problem is that it's impossible to have an intelligent discussion over this.

As far as I can tell, the Swedish government never had this plan. It was mentioned in an interview and dismissed as unworkable; journalists misunderstood.

On the other hand, the UK government appears to have had no plan whatever until jolted into action by the fear that public opinion would turn against them.

What Sweden has done is similar to Norway, where I live, which relies largely on voluntary changes in behaviour and temporary closure of institutions and businesses that require close contact between employees and customers. But Sweden took longer to implement those measures, and Swedish society is also different from Norway's: anecdotally, Swedes seem to me to be more urban and more gregarious than Norwegians.

Exactly why Sweden has a much higher death rate, 36/100k inhabitants versus 4.3/100k in Norway, is unclear at the moment, partly because of different definitions but also because of differing conditions and the epidemic being at different stages in the two countries.


It seems the government was following this document at the start: https://assets.publishing.service.gov.uk/government/uploads/...

The reason it seemed they were doing nothing lies in these passages:

ii. Minimise the potential impact of a pandemic on society and the economy by:

• Supporting the continuity of essential services, including the supply of medicines, and protecting critical national infrastructure as far as possible.

• Supporting the continuation of everyday activities as far as practicable.

• Upholding the rule of law and the democratic process.

• Preparing to cope with the possibility of significant numbers of additional deaths.

• Promoting a return to normality and the restoration of disrupted services at the earliest opportunity.

There's way more, but I've honestly not read it all. But there was a plan, drafted before this epidemic.

Public opinion was turning against the government, but it actually kept course for some time - something I was honestly impressed with. What made it drop the plan was Neil Ferguson's study.

There are many reasons for criticising the plan. This article is pretty good. https://www.theguardian.com/politics/2020/mar/29/uk-strategy...

What really gets me is that if the lockdown was the correct decision, we arrived there for the wrong reasons.

This paper had such an outsized impact that it should be held to a higher standard. And it's scary (but not really unexpected) that the government is making decisions of this magnitude based on such a shaky foundation.


> beating COVID by creating herd immunity, similarly to Sweden

The big problem here is that herd immunity requires that either you have a vaccine or you get some large fraction of the population infected, over 50%.

The death rate is about 1%, plus further people suffering long-term complications.

So achieving herd immunity in the UK would require about 300,000 dead (roughly 67 million people × 50% infected × 1% fatality ≈ 335,000).


Yeah, requiring years of training before writing anything is nonsense. Everybody should learn to write code, and there's tons of interesting stuff you can do without knowing software engineering best practices. Not every scientific model has to work at industrial scale or be maintainable by many people over many years.

My problem is entirely with the article that blames the tool they chose and the software engineering community that didn't put big warning stickers on that tool.

