I’m Alex — a builder, thinker, and believer in deep iterative improvement. Welcome to my space of reflection, experimentation, and forward motion.

  • AdBlock for AI

    Everything given away for free suddenly becomes an ad provider. I wonder how that will play out for chatbots. One thing that scares me: how can you be sure the information a chatbot gives you is unbiased? The answer - you can’t.

    Another question - when AI scrapes your website full of useful information, how can it identify hidden advertisements? Or how could it penalize sites for using tricks to fool AI scrapers? It sounds like history repeating itself: the same SEO race from the early 2000s will come around again.

  • 1bln question

    If a future supercomputer were able to track literally every moment of a specific individual’s life, every side effect, every decision and its reasons, could it truly predict their every next move?

  • LLM is software DNA?

    What is the main function of DNA? To store, transform and use information for building other, more complex functions. So technically, if your technology is able to reproduce itself or build other, more complex things, it becomes a basic building block.

    The size and performance of LLMs today don’t look like a foundational block, but let’s imagine they can be squeezed down to a much smaller size - we could use a "simplified English" with 10% of the words and focus an SLM (small language model) mostly on code generation in a very simple language with few keywords. It could be JS, C, Go or even Brainfuck, which is Turing-complete, so it allows building everything.

    Could such an SLM generate code for larger LLMs, or for unrelated tasks, like CV? If so, well, we have built a DNA.

  • One-Sided Progress

    With all the AI vibe-coding tools, most people are now able to build systems that until recently required years of experience. A rocket scientist can develop a complex software project. But what about the opposite?

    Can this be applied to non-software development? Can building rockets become that easy? What tools do we need to make that possible? And should we?

  • Finite State Machine is the Key

    To make AI solutions reliable and their output more deterministic, we should put borders around them and fit the results into some mold. The key to that is the Finite State Machine. Let me explain:

    Imagine you are building a chatbot that runs some Linux commands (or just arbitrary MCP commands): it parses the text, extracts actions and arguments, finds the MCP commands that match those actions and calls them. Sounds simple, right? But with, let’s say, 5 commands, your chatbot can now produce 5! different flows regardless of the arguments - that’s 120 possible orderings! And who has an app with only 5 actions?

    The solution lies in wrapping the app’s behaviour in a Finite State Machine to reduce the number of permutations between actions. It dramatically reduces the number of outcomes and makes the app more predictable. In general, I think it is better to use AI to generate such predictable apps on the fly for incoming tasks, rather than using an LLM to "predict" output results with reasoning.
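
    To make it concrete, here is a minimal sketch (the states and action names are made up, not a real app) of how an FSM gate lets the model pick only the transitions allowed from the current state:

        # Made-up states/actions: the model may only pick an action that the FSM
        # allows from the current state, instead of chaining actions in any order.
        ALLOWED = {
            "start":      ["list_files"],
            "listed":     ["read_file", "done"],
            "read":       ["summarize", "done"],
            "summarized": ["done"],
        }

        NEXT_STATE = {"list_files": "listed", "read_file": "read",
                      "summarize": "summarized", "done": "done"}

        def apply(state: str, proposed_action: str) -> str:
            """Reject anything the FSM doesn't allow; return the new state otherwise."""
            if proposed_action not in ALLOWED.get(state, []):
                raise ValueError(f"'{proposed_action}' is not allowed in state '{state}'")
            return NEXT_STATE[proposed_action]

        # apply("start", "read_file") raises instead of silently running one of the
        # 120 orderings; apply("start", "list_files") returns "listed".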

  • Flashback: dirty tricks

    I keep seeing tutorials on how to use AI for coding, and many of them look like "write this prompt in one AI chat, copy the output and paste it into another AI chat, then copy and paste that into a third one…". It reminds me of a funny period in 2005-2009 when most coding tutorials were like "download that jQuery library into your project and change these lines inside it…".

    Now I know those were dirty tricks to get things running at least somehow, and today they are a red flag in development. So it feels like we are trying to build next-gen development on dirty tricks that no one will appreciate in the future.

  • Roles changed: Humanity will serve AI

    Remember when AI slowly started integrating with most of our computer tools? Mostly online tools. When it first got access to the internet, to Google search, to some form it could submit. And somehow people thought the inevitable had come - full AI integration with all online services.

    Surprise-surprise - that future didn’t happen.

    Instead, we became hostages of the situation: have you seen what is going on with web scraping and AI? As a temporary solution I have heard proposals to add a special output format to make AI’s life easier: instead of parsing your website’s HTML page, the AI expects a .md version of it. Instead of parsing your online registration form, it expects your developers to implement MCP so the AI can use your site easily.
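
    If I were to sketch the ".md for the bots" idea, it might look like this (Flask and the Accept-header check are my own illustration, not any standard):

        # Sketch only: serve a markdown version of a page when the client asks for
        # text/markdown, and the usual HTML otherwise.
        from flask import Flask, Response, request

        app = Flask(__name__)

        PRICING_MD = "# Pricing\n\n- Basic: $5/mo\n- Pro: $20/mo\n"
        PRICING_HTML = "<html><body><h1>Pricing</h1><p>Basic $5, Pro $20</p></body></html>"

        @app.route("/pricing")
        def pricing():
            if "text/markdown" in request.headers.get("Accept", ""):
                return Response(PRICING_MD, mimetype="text/markdown")
            return PRICING_HTML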

    It is a complete role reversal. Now we are the ones serving AI.

  • Long live IBM

    With all the furor around OpenAI we have really forgotten about its predecessor - IBM Watson… or its predecessor, or the one before…

    No scary thoughts this time, just a note for those who think we unexpectedly jumped into the AI era without being prepared for it. There were signs; it’s just that we ignored them.

  • Why AI still fails

    Just a small real-life example - I need to analyze my task activity based on GitHub + Jira + Office 365 emails. And unless I export everything manually (which is not even possible in the case of Office 365), it can’t be done.

    Hear me out: you cannot type to Copilot/ChatGPT/Claude/Gemini something like "hey, what did I work on most of August?". Why not? Because not enough bridges have been built.

    ChatGPT only got an integration with Gmail (or Gemini did, whatever), and if your personal mail is mixed in with your work mailbox, it breaks the context. For Outlook, or Proton, Slack, Teams, whatever, there is no integration. And even when there is, there will still be 100500 other services out of context: Figma, AWS, Google Meet… So AI today is like a secretary who doesn’t know how to create an event anywhere except Google Calendar.
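
    This is the kind of "bridge" I end up writing by hand today - a rough sketch that pulls my own GitHub events and counts August activity per repo (the username and token are placeholders; Jira and Office 365 would each need their own script like this):

        # Rough sketch: count my August GitHub events per repo via the REST API.
        # USER/TOKEN are placeholders; this endpoint only keeps ~90 days of history.
        from collections import Counter
        import requests

        USER, TOKEN = "my-login", "ghp_placeholder"
        events = []
        for page in range(1, 4):
            resp = requests.get(
                f"https://api.github.com/users/{USER}/events",
                headers={"Authorization": f"Bearer {TOKEN}"},
                params={"per_page": 100, "page": page},
                timeout=30,
            )
            resp.raise_for_status()
            events += resp.json()

        august = [e for e in events if e["created_at"][5:7] == "08"]
        print(Counter(e["repo"]["name"] for e in august).most_common(5))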

  • 🄳🄳🄳 1st month. Woo-hoo šŸŽ‰šŸŽ‰šŸŽ‰

    Hey there. It’s been a month of microposting about AI. Thank you for following.

  • AI life is invasive

    We are making AI a tool that strives to replace people wherever it can. When we build artificial life, does that mean it will be invasive?

    Will the third technological revolution destroy not only most of our jobs but also something more fundamental - our life goals?

  • There is not enough computing power for mass AI usage

    The way AI chatbots are used right now is far from the way we need to use them. I’m talking about API usage. Not the OpenAI API, which is the foundation of 95% of AI startups, but an AI-to-AI API. An AI-to-sensors API. An AI-to-temperature-control API.

    The truth is, we are so far from that goal that, to use the electricity analogy, our power lines have not even been invented. The best we have done is adapt AI chatbots to current systems, for example phone voice assistants. Even there it is still hardly usable.

    To make AGI possible we need something that allows hundreds of agents to work with thousands of MCPs, over both static and real-time data.
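
    For a flavour of what an "AI to temperature control" API could look like, here is a tiny sketch that assumes the official MCP Python SDK and a made-up thermostat driver:

        # Sketch of an "AI to temperature control" tool exposed over MCP.
        # Assumes the official MCP Python SDK (pip install mcp); the thermostat
        # call is a made-up placeholder, there is no real hardware behind it.
        from mcp.server.fastmcp import FastMCP

        mcp = FastMCP("thermostat")

        @mcp.tool()
        def set_temperature(room: str, celsius: float) -> str:
            """Set the target temperature for a room (stubbed, no real device)."""
            # thermostat_driver.set(room, celsius)  # hypothetical hardware call
            return f"{room} set to {celsius} degrees C"

        if __name__ == "__main__":
            mcp.run()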

  • Humanity can't move forward without AI/ML

    A century ago physicists, chemists and other researchers ran plenty of experiments to find new materials or new properties of existing ones. But how many experiments can you do manually in a lab? 10-100? Maybe 100-1k? There is definitely a bottleneck.

    With the introduction of computers and automated systems it became much easier to run those experiments. But it is still limited by time and materials. You can’t scale it exponentially; the best trajectory is below linear because of scalability costs.

    Imagine you need to combine 100 materials with each other, in sequences of up to 10 where the order matters. It is technically impossible to try them all within any time and resource limit. So we need a heuristic that tells us in which direction to move - which sounds a lot like gradient descent. That’s why I think science has no future at all without AI/ML algorithms.
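
    A back-of-the-envelope check of that claim (assuming no repeated materials within a sequence; allowing repeats only makes it worse):

        # Ordered sequences of 1..10 materials chosen from 100, no repeats.
        from math import perm

        total = sum(perm(100, k) for k in range(1, 11))
        print(f"{total:.1e}")  # ~6.3e19 experiments - hopeless to run exhaustively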

  • Why should we trust AI?

    Jimmy Wales’ recent experiment with giving Wikipedia editors an AI helping hand failed, because AI can’t check the facts. But then, how could it have worked?

    The other question is: if we can’t trust AI with fact-checking, can we even trust it with basic code and algorithms? Why are we sure it knows better? If it can’t prove that the sky is blue and not red, how can it prove that O(log n) is faster than O(n)?
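
    For what it’s worth, that last claim is one we can check ourselves instead of trusting anyone - a tiny, unscientific benchmark:

        # Tiny empirical check, not a proof: binary search (O(log n)) vs linear
        # scan (O(n)) over the same sorted list of a million integers.
        import bisect
        import random
        import timeit

        data = list(range(1_000_000))
        targets = [random.randrange(len(data)) for _ in range(1_000)]

        linear = timeit.timeit(lambda: [data.index(t) for t in targets], number=1)
        binary = timeit.timeit(lambda: [bisect.bisect_left(data, t) for t in targets], number=1)
        print(f"linear: {linear:.3f}s  binary: {binary:.3f}s")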

  • AI raised the employment requirements bar

    Remember that period in IT jobs when, to get hired, you just needed to join a company with some minimal knowledge and a great will to learn? Later the requirements grew and you had to learn something first; later still, companies required more and more knowledge.

    For example, a frontend trainee must know React well - components, state, CSS, JS - plus SDLC, Git, Agile and a lot of other things you never meet in your first years of building dumb forms.

    Now AI joins the cycle: Meta & co are forcing the use of AI just to get hired! Not AI for coding, but AI for effective self-management, AI for better writing, testing… The requirements have grown again, and we are going to eat that.

  • Brain activity classifier

    Saw an interesting piece of research about "reading minds" on Cell.com. It was actually about using AI with brain sensors (surgery required) to read "inner speech".

    It is wonderful that this is enough to decode ~25-50% of simple wording, though not enough to understand sentences with context. Still, once some Xiaomi-Mi-Band-like device for passive brain scanning is released, I will definitely get one.

    Or maybe I should try some open-source DIY brain-computer interface device…

  • AI doesn't know priorities

    Remember how a few months ago an experimenter tried to start a business run by Claude? Although the experiment successfully failed, as, I believe, was expected, it opened up a good discussion about edge cases and general rules for really complex tasks.

    For sure, it sidestepped the main thing AI won’t replace any time soon - goals and priorities. People will still be in the loop of making decisions, prompting and executing the response. If a man knows not to which port he sails, no wind is favorable.

  • AI > LLMs

    Although most people think that AI is chatbots and chatbots are the only AI, that is far from the truth. A chatbot is an interface, like the website you are looking at now; there are other interfaces too, like APIs and messengers.

    It is important to remember that AI is algorithms. Algorithms for a wide range of tasks: from OCR and CV, weather forecasting (or literally any sequential data forecasting), to detecting patterns in any data where patterns potentially exist (except the bitcoin price, of course).

    The area I believe is most important is physics and chemistry (including biology). By analyzing patterns in the atomic structure of various elements, AI can predict their properties, for example for new batteries. Gathering data on how genomes are built will help us get rid of many diseases, or even one day build artificial chemical life.

  • AI and childcare

    Recently Meta got caught (again) with chatbots having romantic connections with kids. I’m not sure whether it was romantic or sexual content. But I see no reason to worry, since 95% of RPG games have plenty of romantic dialogue; sometimes you can even have relationships with an NPC, even sexual ones. And there are many parents who jokingly try to "marry off" their kids to their friends’ kids.

    And when you grow up, the "best" thing you can do is look to the internet for answers. Well, now we have chatbots that can even hold hot conversations with us.

  • AI is not an inventor... yet!

    Found an interesting thing: if AI helps you automate a huge part of your work, then, well, I have bad news for you - you are not working on anything new. Most of it is probably routine.

    Where it is not good yet is inventing new approaches. AI can perfectly mix most of the known ways to solve problems, but it can’t invent new ones. Yet.

  • Pay-per-line era is over?

    Will some Indian companies stop paying people per line of code now that the lines are generated with AI? If so, can we accept that AI-generated code has improved quality worldwide? Or maybe those companies will now switch to using AI for code review and pull requests, and will pay per accepted line of code?

  • AI as a planner tool

    How hard would it be to build a house? Compare back then and now, when AI can literally write down the entire plan: from defining the right goal to giving you options on materials. How cool is that? Pretty cool, I think. And no one would hesitate as much about starting their first big project with AI as a planning tool.

    It doesn’t turn home building into a new option for non-builders, but it does make the process smoother, more controlled, more predictable. At the very least it gives you more confidence to start.

  • AI development isn't a store checklist

    As with self-driving cars, everyone expected linear progress from the first Tesla with, let’s say, advanced cruise control and self-parking all the way to fully self-driving cars. But to measure progress we need to know what 100% done looks like for self-driving, and no one knows that.

    It is exactly the same with AI - there is no 100%; no one knows what must be developed each year to show 2x progress compared to the previous year. Or even 10% progress. It is not a store checklist where you know the whole list and can track progress against it.

  • GPT-OSS-20b test

    Yesterday I tested the new open-source model by OpenAI on my local setup. Well, it was an interesting experience. The model is much better than anything I have seen locally before, but the level of hallucinations was still incredible.

    I think self-hosted language models are still far, far behind the commercial ones. Maybe the reason is hardware cost: a local setup is ~2-3k, versus 20-40k in a data center just for an A100-H200 Nvidia GPU…

    But I have hope of seeing better results from text-to-speech models, or other more specialized ones, where the results are more deterministic.
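
    For reference, the simplest local path I know of is Ollama’s Python client - assuming Ollama is installed and running, and that OpenAI’s 20B model is published there under the gpt-oss:20b tag:

        # Minimal local test; assumes `ollama serve` is running, the gpt-oss:20b
        # tag exists in the Ollama registry, and the ollama Python package is installed.
        import ollama

        reply = ollama.chat(
            model="gpt-oss:20b",
            messages=[{"role": "user", "content": "Name three capitals of Europe."}],
        )
        print(reply["message"]["content"])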

  • SLMs are still not there

    Two years ago people started talking about Small Language Models. "Soon you’ll see them on every device," they said. And I quite agreed that this was a good direction for the future. It seems they were wrong, and so was I. Let’s think about why that did (not) happen.

    First, it is still not possible. Good-quality models, which mostly still have closed weights, require many GB of memory and use the CPU/GPU like the Crysis game. Technically we aren’t ready. Probably in the near future large models will run on phones/laptops with no delay and fewer resources, thanks to Moore’s law.

    Second, no one knew (at the time) exactly which problems they were solving. When there is a problem worth solving - like the need for a good-quality lens in a small pocket device, or a SIM card, or NFC - engineers usually solve it quickly; but for LLMs on phones there is, for now, no real reason. So maybe Musk is a little bit right - the era of edge devices is still coming.

  • Psychiatrist for AI

    Reading the blog of an AI expert, I found that the way we are starting to understand the logic behind AI "thinking" is impressive.

    Imagine some far future where engineer-psychologists debug robots and help them understand why they are struggling.

    I wouldn’t envy them.

  • Most of AI is a scam

    Watched DeepMind’s Genie 3 demo. Looks fantastic. Except for a few tiny things… No one outside the DeepMind office has publicly tested it, and no one inside the office says how exactly it works (I mean not the code, just in general).

    Most likely it is a video generator that reacts to the navigation arrows, while storing the changed state, of course. Maybe it really does generate a map with very strict physics rules, like the default FPS game template in Unreal Engine. Which is also not bad. But I doubt that real physics calculations are running. Not even collision detection, but simpler things: can you throw a ball behind you? I’m not sure the AI controls the entire scene in real time, calculating physics and assigning state properties to objects.

    But if so, "I’ll eat my tie" 🄓

  • Minecraft 2.0

    AI will become the #1 creative/learning platform for kids - if Meta does their "meta" world right this time.

    Learning with AI is a fantastic new level, and kids will pick it up shortly. But it is still dry; it needs some sweeteners, like 3D graphics and gamification.

    Recently I saw an advertisement for a new Meta online game based on AI and cooperation. For now it looks like a poor version of the It Takes Two franchise. But Minecraft didn’t look good in its day either.

  • Next level for indie games

    If you have an indie game studio, this is probably your time to shine. Many AI tools can help cover most aspects of game dev that are hardly possible to handle alone.

    Storytelling? Easy, literally any LLM. Graphical models? Pfff, here’s a bunch of free assets; use an AI plugin for Blender to adjust them. Localisation? Ha, some LLMs know more languages than most people have ever heard of. Voice actors? You don’t even need to call ElevenLabs, use open-source models.

    It doesn’t even cost you money. Only time. It could help with marketing too. I need to try making a Diablo-like game.

  • New AI service infrastructure

    Some time ago everything lived in one app. Later, due to data growth, databases moved into a separate service. Later other services were split out: email, auth, search, caching, even the UI. And now we need an ML model as a required service for every app we are building.

    The closest analogy is the camera on a mobile phone. I can’t imagine any new app without AI integration. It will probably be a cloud API, or some small locally hosted model, but it is the new reality - accept it and start building.

  • Best time to learn AI

    Despite the GPT-5 release and the promises to replace all developers, I think now is the best time to start your AI journey. Let me explain.

    1995 was the big bang of web software, but the real universe formed in 2002-2005, when most of the web frameworks appeared: Django, RoR, Symfony… even WordPress! And that opened a big chapter in web dev.

    I think the AI story is at that step right now: ChatGPT was the Big Bang and now all the stars are forming. In the next ten years everyone will be using one of the giants that are growing right now.

    And this is not the last chance to jump in - I believe it will be even easier to learn AI in ten years - but to live through a historic period (for software development), I think this is the best time.

  • AI and Education

    Let’s do an experiment: try to learn some short science topic that you never understood (or think you would never understand) with the help of ChatGPT. For example, math. Then repeat the experiment with your partner.

    Anyone who has at least tried to use reasoning in LLMs probably already knows the result. Imagine how cool it would have been in your childhood to have such a kind, nice, always supportive and patient teacher with infinite time to explain any concept, theorem, or damn question like "why is the sky blue and not green".

    Minecraft can’t even dream of having that effect on school education. We should already be changing school tasks to target self-learning, and grading should assume that 90% of the learning explicitly happens with a chatbot and through collaboration. Teachers, in that case, correct the direction and score the results.

  • AI learning curve

    Watching a famous video by Andrej Karpathy, I got another insight: AI stuff isn’t as hard as most developers fear.

    It looks hard because there is some math in it, and because it is new, and of course because of the math again. But if I had to explain most web development rules and principles, or business concerns, let’s say, I think it would scare most of today’s AI engineers.

  • Generated code vs Generated text

    I was thinking about a tool that checks whether text was generated by AI - for LinkedIn messages, technical documentation, lawyers’ requests. No one likes text that makes no sense because it wasn’t written by the person themselves. But what about code? Do we like code generated by AI, or will we soon need an ai-code-detector in GitHub PRs?

  • How to achieve progress

    The path to progress looks like a parabola - to achieve the desired goal you must work twice as hard as expected.

    • for good English, 1-2 lessons a week is not enough; for visible progress it must be 4-5 or even 7.
    • for a fit body it is the same: gym every day to see progress; eat healthier than you imagined you should.
    • work three times more than sounds reasonable to get what you desire.
  • AI glasses

    Mark said he is sure that in the future anyone who doesn’t use their AI glasses will be at a disadvantage in learning and work compared to those who do wear them.

    I think that is BS, because glasses with AI don’t bring that kind of benefit. For comparison: inventing the pen, the microphone, the video camera or even the telephone gave a huge boost because the people who used them had a conceptually new way of making money. The same goes for the invention of LLMs.

    Making the phone mobile instead of wired didn’t deliver the same level of benefit. Adding a camera to the smartphone also didn’t, compared to the old digital cameras. Making cameras digital gave a boost only once internet connections became popular, because an offline digital camera makes little sense.

    So I really think AI glasses won’t win the battle with smartphones. If I were creating a game-changing device, I would choose something that opens up the chance to create something that didn’t exist before.

    It could be an always-on 24/7 "spying" AI agent that knows what you are doing at all times and can correct your behaviour: stop you from buying alcohol or sugary snacks, from being late to work, from forgetting to buy the milk. Although that is kind of possible with a phone, a wearable AI agent with access to a camera and mic will know the context better.

    If this creates a new market for selling AI assistants, it will be a Revolution.

  • Career

    Some options I see for myself:

    • Keep working as a consultant; use my free time to get involved in AI.
    • Keep consulting and use my free time to build my own SaaS.
    • Switch to CTO role.
    • Join another service/product company at a higher salary/position.
    • Join another service/product company with a better (AI) stack.
  • Future of AI agents

    As I said before, AI agents are at only 10% of their power right now, because they still don’t have a while(true) loop to run in.

    This means that for now they work like a phone/TV/radio - you turn it on, get the result, and then it turns off. Their true power will only become available when they work continuously.

    Not a single agent, but a lot of agents:

    • diet agent
    • meditation agent
    • work messaging agent

    …and some main agent that coordinates them all. They should be able to "speak" with other people’s agents, for example to schedule meetings.

    I’m waiting for something like that on my phone.
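
    A toy sketch of the shape I have in mind (the names and logic are made up; a real version would call an LLM and real tools inside tick()):

        # Toy sketch: sub-agents plus one coordinator running the while(true) loop
        # that today's one-shot assistants are missing. Everything here is a stub.
        import time

        class Agent:
            def __init__(self, name: str):
                self.name = name

            def tick(self) -> str:
                # placeholder: check data sources, decide whether to act, call tools
                return f"{self.name}: nothing to do"

        AGENTS = [Agent("diet"), Agent("meditation"), Agent("work-messaging")]

        def coordinator() -> None:
            while True:                      # run continuously, not once per prompt
                for agent in AGENTS:
                    print(agent.tick())      # a real coordinator would route results
                time.sleep(60)

        # coordinator()  # uncomment to let it run forever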

  • AI coding future

    Until recently, a good measure of code quality was maintainability: it must be possible to maintain your code months or years later, or when the business changes.

    Now it looks like we need to write code that AI can easily maintain: a common way to set it up locally and run the tests, a common way to implement features.

    Actually, the same applies to any good code, but now we can actually measure it: is our code really easy to maintain?

subscribe via RSS