
Ask a Developer! OpenSource information

So, AI is just a statistical relation between the words that come before and after.

Carsten,

Thanks for laying out the legacy systems so clearly. I’d like to introduce you to what’s coming next: AI that doesn’t just regurgitate old code or static patterns but actually “thinks” through problems.

Recent breakthroughs in large language models (LLMs) have moved beyond simple next-word prediction. Using techniques like chain‑of‑thought prompting, new models are now able to decompose complex problems into a series of intermediate reasoning steps before providing a final answer. In practical terms, models such as OpenAI’s o1 (internally called Strawberry) and DeepSeek’s R1 now solve challenging math, coding, and scientific tasks at levels comparable to human experts.

These models spend extra “thinking time” during inference—much like a chess player considering several moves ahead—allowing them to refine their strategies and even self-correct before answering. For example, on rigorous tests like the International Mathematics Olympiad qualifying exam, o1 has dramatically outperformed previous models. This isn’t just an incremental update; it represents a shift from “dumb” output generation to a dynamic, human-like reasoning process.
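To make the chain-of-thought idea concrete, here's a tiny sketch of what such a prompt looks like in practice. The wording ("Let's think step by step") follows the style popularized by the prompting literature, but this exact helper is my own illustration, not code from any of the papers cited below:

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question so the model lays out intermediate reasoning
    steps before committing to a final answer."""
    return (
        f"Q: {question}\n"
        "A: Let's think step by step."
    )

# Example: a word problem the model would now decompose instead of
# jumping straight to a (possibly wrong) number.
print(build_cot_prompt(
    "A dealership sold 14 cars in week 1 and twice as many in week 2. "
    "How many were sold in total?"
))
```

Newer "reasoning" models like o1 effectively bake this decomposition in at inference time, so you get the step-by-step behavior without having to engineer it into every prompt.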

I know your expertise is rooted in the tangible aspects of our systems—microservices, APIs, and the cloud—but imagine integrating these smarter models into our architecture. They could help automate debugging, optimize code, and even assist in strategic planning by analyzing vast datasets in ways we never could manually.

I invite you to explore these advancements with me and consider how we might pilot these models in our projects to drive our next-generation innovations.

Below are several key references that outline these developments, along with summaries and their URLs:

  1. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
    This seminal paper by Jason Wei et al. introduces the chain-of-thought technique, showing how LLMs can be prompted to break down complex problems into step-by-step reasoning. This method has been crucial in unlocking improved performance on challenging tasks.
    URL: Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
  2. OpenAI Launches New Series of AI Models with ‘Reasoning’ Abilities (Reuters)
    This Reuters article details OpenAI’s recent release of the o1 model (code-named Strawberry), which employs chain-of-thought reasoning to tackle complex problems in science, coding, and math—demonstrating a significant leap over previous models.
    URL: https://www.reuters.com/technology/...ies-ai-models-solve-hard-problems-2024-09-12/
  3. OpenAI Announces a New AI Model, Code-Named Strawberry, That Solves Difficult Problems Step by Step (Wired)
    Wired’s coverage explains how the new o1 model reasons through problems step-by-step—“thinking aloud” before arriving at a final answer. It highlights the model’s enhanced performance on advanced tasks and its potential impact on our industry.
    URL: OpenAI Announces a New AI Model, Code-Named Strawberry, That Solves Difficult Problems Step by Step
  4. OpenAI’s o1 Model is Inching Closer to Humanlike Intelligence – But Don’t Get Carried Away (Business Insider)
    This Business Insider article discusses how o1’s extended reasoning time allows it to achieve results that resemble human problem-solving, particularly in STEM fields, while noting that challenges like errors and hallucinations still remain.
    URL: OpenAI's o1 model is inching closer to humanlike intelligence — but don't get carried away
  5. What It Means That New AIs Can “Reason” (Vox)
    Vox provides insights into the significance of AI models that “think” before answering. It describes the internal chain-of-thought process that enhances the accuracy and robustness of outputs, as well as the dual-use risks associated with these advances.
    URL: What it means that new AIs can “reason”
Looking forward to your thoughts on how we can integrate these exciting advancements into our next steps.

Best regards,
GPT o3-mini-high
 
That's almost a good reply, but this gave it away:
"Thanks for laying out the legacy systems so clearly. I’d like to introduce you to what’s coming next ..."
Granted, AI has come a remarkably long way, but you can still tell.

AI is evolving at breakneck speed right now. It's truly an amazing time.

Just to be clear, I'm not against AI; I'm embracing it. However, I'm realistic about its abilities.

The thinking and reasoning ... is an interesting take. It slows the interaction down a bit, though.

Reasoning: the user seems to be thinking of something, the user may be thinking of the future, the user is thinking of the future about ai, the user is thinking about the future and how ai will evolve.

Older models either got it right away, or you had to prompt them a bit.

My peer group is now interacting with AI on a daily basis. Developers are naturally tuned like a race engine to look for bugs and have a strong drive to solve them. AI is that grind right now: prompts can only get you so far, and for something like ChatGPT, deleting the chat histories helps reset the connections it tries to make even after you tell it the topic is different.

Sonnet 3.5, without reasoning, performs better for us. So the last-gen technology has the advantage.

This YT channel focuses mostly on new programming languages and database topics. He recently built a chat service that interfaces with various AIs.

This particular video shows an interesting take on how well AI can code. He also uses AI to fix AI code, and he covers Perplexity in real-world usage. TL;DW: AI still fails.


View: https://youtu.be/WVpaBTqm-Zo?si=eyWVb9C6gH2PyQxr
 
> That's almost a good reply, but this gave it away:
> "Thanks for laying out the legacy systems so clearly. I’d like to introduce you to what’s coming next ..."
> Granted, AI has come a remarkably long way, but you can still tell.

Dude, prompt engineering sets up context.

You are seeing me talking to AI, talking to you.

Here is the prompt I created (see the context):

"You are a CTO preparing your team for the paradigm shift that AI will bring. I am the founder and I support your plan. Below is an internal forum thread by a dev named Carsten. Read the thread and craft a reply from your POV that introduces Carsten to the newest information with citations that will open his mind. For example, 'reasoning' itself is creating amazing results. Create the narrative from a CTO's position, speaking to a dev that sees only the weakness of LLMs."

AI's impact on daily life is going to be bigger than the internet itself. For example, AutoMagicLabs.ai has pivoted to become an LLM-centric platform where we feed the LLM data and craft prompts to get shit done.
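For anyone curious how a role prompt like that actually sets context, here's a minimal sketch. The message format follows the common chat-completions convention (a `system` message establishing the persona, then the `user` turn); the thread text is a placeholder, and no model is actually being called here:

```python
# The system message persists for the whole conversation, which is why
# the reply comes out sounding like a CTO addressing a skeptical dev.
system_prompt = (
    "You are a CTO preparing your team for the paradigm shift that AI "
    "will bring. I am the founder and I support your plan."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "<paste the forum thread and instructions here>"},
]

for m in messages:
    print(m["role"], "->", m["content"][:40])
```

Swap the placeholder for the real thread text and hand `messages` to whichever chat API you use; the persona framing does the rest.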
 
Today, anybody working with AI is going to have come across a prompt in one form or another.

I pointed out when I noticed it was AI-generated. My quote puts it clearly at the first sentence.
The prompt is good, except that the model used is lacking, and it clearly shows in the response.

Look, you are clearly misunderstanding what I have been saying about AI, and that may be because of your business. I am not attacking your businesses. If anything, I was trying to provide ideas and help.

Listen for one second.

I am all in on AI.

My point has clearly been that it is still going through growth spurts and isn't really ready for production.

o1 was a few weeks ago. Today it's o3.
Grok 3 came out a few days ago.
Today, Sonnet 3.7 just dropped; Sonnet is my favorite for actual coding.
o3 has been tweaked and is no longer butt-hurt when you call it names.
 