The author clearly doesn’t realize that they still mock in their examples. I understand the annoyance with mocking away the complexity, however.
To address your second claim: doing IO in tests does not mean testing IO.
I test my file interactions by creating a set of temporary directories and files, invoking my code, and checking the outcomes. That way I can write my expectation before my implementation. This doesn’t test IO; it merely uses it. The structure I create in temp is still a mock of an expected work target.
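A minimal sketch of that pattern, with a hypothetical function under test (`find_logs` is invented for illustration, not from the original comment):

```python
import pathlib
import tempfile

# Hypothetical function under test: returns the names of *.log files in a directory.
def find_logs(root: pathlib.Path) -> set[str]:
    return {p.name for p in root.glob("*.log")}

def test_find_logs():
    # Arrange: build a temp structure that mocks the expected work target.
    with tempfile.TemporaryDirectory() as tmp:
        root = pathlib.Path(tmp)
        (root / "a.log").write_text("first log")
        (root / "b.txt").write_text("not a log")
        # Act + assert: real IO happens here, but the test targets
        # the code's behavior, not the filesystem itself.
        assert find_logs(root) == {"a.log"}

test_find_logs()
```

The test states the expectation (which files count as logs) before any implementation exists; the filesystem is just the medium it travels through.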
Very similarly, I recently used a web server running in another thread to define the expected behavior of an API client dealing with a very ban-happy API. That web server is a mock that allowed me to clearly define expectations around rate limiting, SSL enforcement (it is the API client’s responsibility to initialize the network client correctly), concurrency control during OAuth refreshes, etc., without mocking away the complexities of the network. Even better, because of that mock I was able to tinker with my choice of networking library without changing a single test.
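A stripped-down sketch of the in-thread-server approach, assuming a rate-limiting scenario (the handler and the `fetch_with_retry` client behavior are invented for illustration; the original used a real networking library, not `urllib`):

```python
import http.server
import threading
import urllib.error
import urllib.request

class RateLimitHandler(http.server.BaseHTTPRequestHandler):
    """Mock API: the first request is rejected with 429, the retry succeeds."""
    hits = 0

    def do_GET(self):
        type(self).hits += 1
        if type(self).hits == 1:
            self.send_response(429)
            self.send_header("Retry-After", "0")
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    def log_message(self, *args):  # keep test output quiet
        pass

# Run the mock server in a background thread on an ephemeral port.
server = http.server.HTTPServer(("127.0.0.1", 0), RateLimitHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

# Hypothetical client behavior under test: honor a 429 by retrying once.
def fetch_with_retry(url: str) -> bytes:
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.read()
    except urllib.error.HTTPError as e:
        if e.code == 429:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        raise

assert fetch_with_retry(base) == b"ok"
server.shutdown()
```

Because the mock sits behind a real socket, the test exercises actual transport behavior; swapping the client’s networking library would leave this test untouched.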
Mocks, in the general sense the author defined them, are inevitable if we write software in good faith: they express our understanding and expectation of a contract. Good mocks make as few claims as possible, however. A networking mock should sit on the network, for example, lest it make implied claims about the network transport itself.
Teeth cannot produce enamel. Enamel is not a living tissue; it is produced, coral-like, by cells outside the tooth. To grow a new tooth, you need it fully surrounded by specialized living tissue for the whole growth cycle.
PS: I honestly expected something like this to come out of bioelectric computation research, but progress seems slower there. Or rather, knowledge and techniques in other fields are reaching critical mass, giving us these advances.
I could write a long tirade on the terrifying flaws of this logic, but instead I’ll just share a reminder that barely anyone is the villain of their own story.
“90% accurate” is a non-statement. It’s as if you haven’t even watched the video you’re responding to. Also, where the hell did you pull that number from?
What matters is how specific and how sensitive it is. And if the Mirai of https://www.science.org/doi/10.1126/scitranslmed.aba4373 is the same model the tweet mentions, then neither its specificity nor its sensitivity reaches 90%. And since the image in the tweet is traceable to a publication from the same year (https://news.mit.edu/2021/robust-artificial-intelligence-tools-predict-future-cancer-0128), I’m fairly sure it’s the same Mirai.
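To make concrete why “accuracy” alone says so little for screening, here is a toy base-rate calculation. All numbers are made up for illustration; they are not from the Mirai paper:

```python
# Toy screening scenario: a rare condition plus a mediocre test
# can still yield an impressive-sounding "accuracy".
population = 1000
prevalence = 0.01                      # 1% of people actually have the condition
cases = int(population * prevalence)   # 10 true cases
healthy = population - cases           # 990 healthy people

sensitivity = 0.50                     # the test catches only half the true cases
specificity = 0.95                     # 5% of healthy people get a false positive

true_pos = int(cases * sensitivity)    # 5 cases caught
true_neg = int(healthy * specificity)  # 940 healthy people correctly cleared
accuracy = (true_pos + true_neg) / population

print(f"accuracy = {accuracy:.1%}")    # ~94.5%, while missing half the cases
```

A test that misses half the people it exists to find still scores “~95% accurate” here, which is exactly why sensitivity and specificity, not a single accuracy number, are the figures to ask for.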