Here is a more detailed elaboration.
- Make it work.
- Make it right (the code is readable [uses intention-revealing names] and every idea is expressed once and only once).
- Make everything work.
- Make everything right.
- Use the system and find performance bottlenecks.
- Use a profiler in those bottlenecks to determine what needs to be optimized. See Profile Before Optimizing.
- Make it fast. You maintained unit tests, right? Then you can refactor the code mercilessly in order to improve the performance.

no subject
Date: 2025-04-18 23:02 (UTC)
Many applications are short-lived: they get used a few times and then stop being needed.
no subject
Date: 2025-04-18 23:39 (UTC)
Nowadays I seriously consider "once and only once" to be an antipattern, because applying it in practice usually reduces redundancy at the cost of increasing dependencies, which is not a good tradeoff in my book.
no subject
Date: 2025-04-19 11:40 (UTC)
Every problem can be solved by adding another abstraction level, except the problem of having too many abstraction levels.
no subject
Date: 2025-04-19 18:12 (UTC)
Industry practice is to pile abstractions on top, though. Instead of caveman copy-modify, create a recursive currying singleton token observer accumulating wrapper factory.
no subject
Date: 2025-04-19 19:05 (UTC)
Which industry are you talking about? Companies like Oracle? Or some other Indian Enterprise?
no subject
Date: 2025-04-19 20:09 (UTC)
I've seen it play out in several industries — financial services, medical diagnostics, payment processing, etc.
Even in embedded programming as soon as controllers became sufficiently powerful, people started piling abstraction levels on top of each other, and soon any change required unwrapping and often drilling holes through those layers.
no subject
Date: 2025-04-20 11:45 (UTC)
I considered FORTH for that use and decided against it. C with inline assembly resulted in code that ran faster on the hardware we had at the time, but more importantly the standard library was way better.
no subject
Date: 2025-04-20 12:38 (UTC)
Of course, in small functions like `strlen` it does not matter (saying this as the one who made Borland's `strlen` about 4 times faster).
no subject
Date: 2025-04-20 13:24 (UTC)
You're arguing with a stopwatch. There was an interrupt triggered by hardware every 20 milliseconds. The interrupt handler then read the accumulated data from the sensor, processed it, and updated the display buffer directly. The C code could do it in time, after I hand-optimised it to use string operations that were supported by the CPU (but not by the compiler). FORTH was not even close.
And it was much easier to write the rest of the logic (polling the keyboard, the user interface, calculating the necessary metrics) in plain C, using the extensive C standard library.
no subject
Date: 2025-04-20 15:08 (UTC)
- you were more comfortable with C than with Forth
- your Forth was not good enough to write fast code
- you didn't have enough experience with Forth to write the UI and calculations in Forth.
OK, but that's not a problem with Forth. Once the two of us, my colleague and I, snapped together a drilling-site simulator (including the UI, sensor interactions, and interpretation of formulas entered by the user). We spent one week on it. Before that we had spent three weeks writing a dedicated Forth system and drivers, all in Forth. And of course our code was pretty fast.
But again, by then we already had over a year of Forth experience, with our own Forth core, and experience of its industrial use (in drilling control).
What I want to say is that it's not about Forth at all; it's about one's tastes and experience. In those days I could use C, and I've always respected C.
no subject
Date: 2025-04-20 20:22 (UTC)
My FORTH was as bad as my C or my assembler, but it doesn't really matter, because the operation in question was so simple. There aren't that many ways to copy from one memory area to another. Whichever language you use, if your compiler does not convert the loop to a string opcode and does not give you the ability to do so yourself, it's not going to be as fast. Back then there was no speculative execution, so I could look up the number of clock ticks it would take from the opcode table, and even watch the read operations on an oscilloscope to confirm. And it wasn't a small difference; it was the difference between a device that worked and a device that didn't.
no subject
Date: 2025-04-19 05:54 (UTC)
As to the names, at times one has to use backticks:
val `Set^𝟛`: CategoryOfDiagrams = toposOver(`𝟛`)
But, OTOH, a refactoring can break things seriously, and if the refactoring is ideologically correct... it does take effort to fix things.