
    GenAI Explains Itself, We Explain "Chain of Thought" (and set it to music)



    Chain Chain Chain of Thought: the benefits when AI “shows its work”

    When we made the editorial decision to make CoT (Chain of Thought) our focus, I immediately thought of Aretha and Gilbert Arenas. Aretha for obvious reasons - I spent a little time thinking of how to gracefully, wittily work it in, and, admittedly, took the easy way out by just posting a link to the track. Why former NBA star turned popular podcaster Gilbert Arenas sprang to mind requires a bit more of an explanation.

    On his podcast, he made a hilariously convoluted argument for why a series of objectively dumb, self-destructive acts were strokes of intuitive business genius, as they put him in an unexpectedly strong position to land - and get - a huge contract thanks to a quirk in the NBA’s salary structure at the time: “when you add all that dumbness in…it comes out smart. It's like Kanye West, somehow it just gets to genius.”

    Even to AI developers, with intimate knowledge of how the models are designed, trained, and deployed, the process that produces a result can seem as mysterious as it does to us, like a silver ball winding its way through a pachinko machine. Or, per Gilbert, a whole lot of dumbness - what may be more properly called algorithmic randomness - that comes out smart. What happens in the black box stays in the black box.

    Chain of Thought (CoT) shines a light into the black box. CoT is a method that enhances the reasoning abilities of AI, allowing it to break down problems and provide more accurate, step-by-step answers. The other week, OpenAI released a new model called o1 (previously referred to by its code name “Strawberry”). According to a blog post on the OpenAI site, the new o1 version “learns to recognize and correct its mistakes. It learns to break down tricky steps into simpler ones. It learns to try a different approach when the current one isn’t working.” It does this using CoT.

    “We are introducing OpenAI o1, a new large language model trained with reinforcement learning to perform complex reasoning. o1 thinks before it answers—it can produce a long internal chain of thought before responding to the user.”

    In earlier versions of AI models like ChatGPT, responses were typically generated almost instantaneously. These models excelled at quickly producing fluent and coherent answers, but they often relied on pattern recognition and surface-level understanding. They were effective for tasks requiring straightforward, well-known answers, but struggled with more complex reasoning. The approach was akin to "jumping" to a conclusion based on available data, prioritizing speed and efficiency over deep problem-solving. Today’s column gets into the challenges CoT solves, how it works, how it improves AI outcomes, potential drawbacks, and applications for HR.

    The Challenge Chain of Thought Solves
    Despite the power of LLMs (large language models), they have limitations. While they are good at generating fluent text and making basic inferences, they often struggle with complex reasoning tasks. For example:
    • Multi-step math problems: Where several steps need to be completed in the correct sequence.
    • Logical reasoning: Where deductions must be made from a series of conditions or rules.
    • Ambiguous tasks: Where multiple factors must be considered to arrive at a clear solution.

    Without structured thinking, AI might:
    • Provide incomplete answers to questions requiring multiple steps.
    • Skip important reasoning steps, giving conclusions without explanation.
    • Be less reliable for tasks involving logic, arithmetic, or long-term dependencies.

    Chain of Thought solves this problem by forcing models to break down their reasoning process, leading to clearer, more accurate results.
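    The multi-step problems in that first bullet are a good way to see the sequential dependency involved. Here is a tiny worked illustration (the scenario and numbers are invented for this sketch), where each step depends on the result of the one before it - exactly the kind of chain an AI answering in one leap tends to fumble:

    ```python
    # A store has 120 items. 25% sell on Monday. 30 more arrive Tuesday.
    # Half of the remaining stock sells Wednesday. How many are left?
    # Each line below depends on the line before it - skip or reorder
    # one step and the final answer is wrong.

    stock = 120
    stock -= int(120 * 0.25)   # Monday: 120 - 30 = 90
    stock += 30                # Tuesday: 90 + 30 = 120
    stock //= 2                # Wednesday: 120 // 2 = 60

    print(stock)  # → 60
    ```

    A model that writes out each intermediate quantity, the way the code does, has far more chance of landing on 60 than one that pattern-matches straight to a number.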

    How Chain of Thought Works: A Simple Explanation
    At its core, Chain of Thought prompting is about teaching AI to "show its work" rather than jumping directly to an answer. It involves generating intermediate reasoning steps before arriving at a final conclusion. This process mirrors how humans tackle complex problems — by breaking them into smaller, manageable steps and solving them one at a time.
    This is from the highly influential 2022 paper, Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, which “explored chain-of-thought prompting as a simple and broadly applicable method for enhancing reasoning in language models.” The following is a screenshot taken from their paper offering a clear and concise contrast of the two prompting methods:

    [Image: side-by-side comparison of standard prompting vs. chain-of-thought prompting, from the paper]
    By “thinking aloud” and showing its intermediate steps, the model is more likely to get the problem right, ensuring transparency and correctness.
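    The paper’s approach can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the helper name and exemplar wording are mine, not the paper’s): each example in the prompt includes its worked reasoning, so the model imitates that step-by-step style when it reaches the new question.

    ```python
    # Sketch of few-shot chain-of-thought prompt construction: exemplars
    # carry visible reasoning, and the new question is appended at the end
    # for the model to complete in the same style.

    def build_cot_prompt(exemplars, question):
        """Join (question, reasoning, answer) exemplars, then the new question."""
        parts = []
        for q, reasoning, answer in exemplars:
            parts.append(f"Q: {q}\nA: {reasoning} The answer is {answer}.")
        parts.append(f"Q: {question}\nA:")
        return "\n\n".join(parts)

    exemplars = [
        ("Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
         "How many balls does he have now?",
         "Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11.",
         "11"),
    ]

    prompt = build_cot_prompt(
        exemplars,
        "A juggler has 16 balls. Half are golf balls. How many golf balls?",
    )
    print(prompt)
    ```

    The prompt ends at "A:", leaving the model to "think aloud" before stating its answer, just as the exemplar does.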

    How Chain of Thought Improves Outcomes
    • Breakdown of Complex Problems:
      • By breaking down problems into a logical series of steps, the model handles tasks that require multi-step reasoning much better. This is particularly important for solving math problems, answering logic puzzles, or tackling any tasks that involve cause-and-effect reasoning.
    • Increased Accuracy:
      • When the AI generates a series of thought-out steps, it is less prone to make oversights. This process not only helps in minimizing mistakes but also in clarifying ambiguous queries.
    • Transparency in Reasoning:
      • The step-by-step approach provides transparency. Users can see the model's reasoning, allowing them to understand how it arrived at a particular conclusion. If the AI makes an error, users can spot where it went wrong.
    • Versatility Across Tasks:
      • Chain of Thought has applications beyond math and logic. It’s useful in question answering, creative problem-solving, and even in tasks that require common-sense reasoning. Whenever a problem needs intermediate steps to arrive at a solution, CoT becomes invaluable.
    • Handling Ambiguity:
      • When faced with complex tasks where the path to the answer isn’t straightforward, CoT helps the AI navigate by breaking down the ambiguity and resolving it systematically.

    CoT brings transparency, consistency, and clarity to HR processes:
    • Increased Transparency makes HR processes like promotions and reviews easier to understand - and trust.
    • Better Integration of Multi-Step HR Tasks for efficient management of onboarding, training, compliance checks, etc.
    • Improved Employee Experience: when HR provides clear, thoughtful explanations for its decisions, employees are more likely to trust HR. Seeing that decisions rest on logical, well-reasoned steps can translate into greater employee engagement, reduced turnover, and a positive workplace culture.

    In the Weeds
    This gets a bit deeper than you probably need or want to go, but there are other variations and approaches to Chain-of-Thought (CoT) reasoning in AI models. These methods can be tailored to different tasks, contexts, or types of guidance provided. The two you may have heard of are called “zero-shot” and “few-shot” CoT.
    • Zero-Shot Chain-of-Thought: The model solves a problem without any examples, generating step-by-step reasoning from scratch based solely on the prompt. It breaks down the problem independently, with no prior context or examples.
    • Few-Shot Chain-of-Thought: The model is shown a few examples of similar tasks with reasoning steps before solving a new problem. It uses these examples to guide its reasoning for the new task, improving performance by mimicking the provided examples.
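    The difference between the two is easiest to see side by side. A minimal sketch, with illustrative question text of my own (the "Let's think step by step" trigger phrase for zero-shot CoT comes from the 2022 Kojima et al. work on zero-shot reasoners):

    ```python
    # Contrast of the two prompting styles described above.
    # The strings are sketches, not any vendor's official prompt format.

    question = "If a train travels 60 miles in 1.5 hours, what is its speed?"

    # Zero-shot CoT: no examples, just an instruction to reason step by step.
    zero_shot = f"Q: {question}\nA: Let's think step by step."

    # Few-shot CoT: one worked example with visible reasoning, then the question.
    few_shot = (
        "Q: If a car travels 100 miles in 2 hours, what is its speed?\n"
        "A: Speed is distance divided by time. 100 / 2 = 50. The answer is 50 mph.\n\n"
        f"Q: {question}\nA:"
    )

    print(zero_shot)
    print(few_shot)
    ```

    Zero-shot costs nothing to set up; few-shot generally performs better because the model has a concrete reasoning pattern to mimic.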

    For More on CoT:


    Copyright © 1999-2025 by HR.com - Maximizing Human Potential. All rights reserved.