…the team?”

Assumptions (very optimistic scenario):
- Part of cycle time spent on coding: 40%
- Part of coding supportable with a coding assistant: 60%
- Rate of faster task completion with the coding assistant: 55%
- Potential time saved in cycle time: 13%
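A quick sanity check shows how the 13% follows from the three assumptions (a back-of-the-envelope multiplication, reading the 55% as a 55% time reduction on the supported tasks):

0.40 (coding share of cycle time)
x 0.60 (share of coding the assistant can support)
x 0.55 (time reduction on those tasks)
= 0.132, i.e. roughly 13% of cycle time potentially saved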
CYCLE TIME (from coding to “Done”): how most teams measure speed.
LEAD TIME: all activities to bring one feature live, including lots of things not even related to a specific feature.
…organisation

Tool landscape (diagram):
- AI Chat: e.g. ChatGPT, Claude, Gemini, enterprise AI chat, …
- Browser, wiki, …: read and write integration with knowledge bases
- Rapid app generation: e.g. Lovable, v0, Bolt, …
…domain context, helps deepen domain vocabulary; unburdens the SME; helps think out of the box.
Caveats: too many ideas can lead to delays and a lack of clarity on the team; UX design is much more than visuals of individual screens; prototypes don't fully replace high-fidelity design.
For each idea, create a self-review and reflect on whether it is a good work package:
- Is this a work package that creates value for an end user?
- Is this a work package that is a vertical slice, i.e. one that touches all necessary layers of the implementation?
- Is this a work package that is purely about technical setup? In that case, your review should point out that it should ideally be integrated into another, more functional work package. Think about adding another functional requirement that would include this.
- Is this a work package purely about a cross-functional concern, like "improve performance" or "make more user-friendly"? If so, your review should point out that this is not an ideal work package, as cross-functional requirements should be implemented as part of every single functional requirement.
- Testing or quality assurance should never be a separate work package; it should always be part of the functional work packages.
…organisation

Tool landscape (diagram):
- Design assistant: e.g. Figma AI, Creatie, UX Pilot, …
- Rapid app generation: e.g. Lovable, v0, Bolt, …
- AI Chat: e.g. ChatGPT, Claude, Gemini, enterprise AI chat, …
- Browser, wiki, …: read and write integration with knowledge bases
- Canvas support for better AI-human collaboration
- Image and diagram generation support
- Issue tracker
Cognitive transfer (diagram): models of thinking / practices → our problem at hand.
- Understand: what are good models of thinking to apply to this problem?
- Apply: what would that look like for my situation?
An LLM supports both steps.
Amplify a practice with a prompt
Reusable prompt (the architect's requirements):
We want the ADR to have the following structure:
- Title: should always specify the decision that was taken, not the problem that is solved
- Decision summary
- Context relevant to the decision: should describe the status quo and goals, the requirements and WHY we need them, and how this decision would affect the business
- Options considered: should always be more than one
  - Each option should start with a short description of the option
  - Each option should have a section called "Consequences" that describes the trade-offs of that option (positive and negative consequences)
  - One of the "Options considered" should always be "Do nothing".
...
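For illustration (a hypothetical title, not from the source): "Use PostgreSQL for order persistence" follows the title rule, while "How should we store orders?" names the problem rather than the decision.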
…software delivery?

Super powers of Gen AI:
- Brainstorming and ideation: product ideation, more comprehensive requirements, more comprehensive testing, architecture, exploratory testing
- Remembering details and learning: understanding errors, providing organisational context, amplifying and socialising knowledge within a team, finding knowledge, change logs, incident management (run books)
- Summarisation and clustering: research, documentation
- Translation: requirements to code, code to code, language to queries, standard to standard
…e.g. in deployment

Properties of Gen AI to be wary of in software delivery:
- Non-deterministic: not a compiler; cannot do maths (by itself); LLMs "think" in tokens
- Context is key, especially in brownfield: LLMs don't know what we don't tell them
- Superficial plausibility: generates very plausible outputs, but the devil is in the details; can lead to review fatigue
There is no "X%" answer to this question.
- Reduced story cycle time (10-20% with a coding assistant)
- Faster onboarding and upskilling
- Improved developer experience
- Higher test coverage
- Code quality and maintainability
- Faster feedback loops
- Stability: MTTR, incidents, availability
- Less delivery friction
…the system.

Higher coding throughput: if you can code faster, can you review faster? Can you test faster? Can you ship faster? If you can code faster, can you fill the backlog faster? If you can produce more code, can you keep your technical debt in check?
Thoughtworks: "Use of AI can easily lead to divergence rather than convergence. I have spent a lot of time revisiting design decisions every time a new AI-generated idea comes up."
…quality control shifted right? More up-front design? AI can generate beautiful, plausible, over-detailed requirements. When everything looks plausible and kind of works, we just push it on to the next step.