Unreliable AI
Why do we want to build AI as robots whose actions are always correct and calculated, yet instead we build them human-like, where “mood” (temperature) matters more than facts?
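To make that “mood” concrete: in language models, temperature rescales the model’s output distribution before a token is sampled, so the same input can yield different answers from run to run. A minimal illustrative sketch (the helper `sample_with_temperature` is hypothetical, not any particular library’s API):

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0):
    """Sample an index from logits scaled by temperature.

    Higher temperature flattens the distribution (more "mood",
    less determinism); temperature near 0 approaches argmax.
    """
    scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
    # Softmax with max-subtraction for numerical stability.
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

# The same logits can produce different picks as temperature rises.
logits = [2.0, 1.0, 0.5]
print([sample_with_temperature(logits, t) for t in (0.1, 1.0, 2.0)])
```

At temperature 0.1 the sample is almost always the top-scoring option; at 2.0 the lower-scoring options win regularly, which is exactly the unreliability the question above is about.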
Just a guess, but it seems to be in our nature. Or maybe it’s the winning scenario of evolution: the harder the task, the less specific the solution you get. Heisenberg’s uncertainty principle, in a way.
I can only imagine a future software sales contract: our algorithm handles roughly 70% of requests correctly.