
TypeScript Compiler Moving To GO


Tamar Twena-Stern

May 15, 2025

Transcript

  1. • How to match the technology to the problem domain?

    • How does understanding the runtime environment of your model impact your design decisions?
    • How to change the foundations of our model?
  2. TypeScript never runs in production; it is always transpiled to JavaScript.

    Write TypeScript natively → type checks → run JavaScript after transpilation. The build phase will surface TypeScript errors.
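    A minimal sketch of this flow (file name and values are hypothetical): the assignment below fails the type check during the build, while the Person interface itself is erased from the JavaScript that would run in production.

    // person.ts - hypothetical example
    interface Person {
      name: string;
      age: number;
    }

    // Build phase: error TS2322: Type 'string' is not assignable to type 'number'.
    // Runtime: the interface is erased, so only the build catches this mistake.
    const p: Person = { name: 'Dana', age: '30' };
    console.log(p.name);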
  3. The build time of TypeScript will become 10x faster -

    not the runtime of the program.
  4. Improvement In TypeScript Compilation On Tested Projects

    Project                  Codebase Size (LOC)   JavaScript Implementation   Go Implementation   Speedup
    VS Code                  1,505,000             77.8s                       7.5s                10.4x
    Playwright               356,000               11.1s                       1.1s                10.1x
    TypeORM                  270,000               17.5s                       1.3s                13.5x
    date-fns                 104,000               6.5s                        0.7s                9.5x
    tRPC (server + client)   18,000                5.5s                        0.6s                9.1x
    rxjs (observable)        2,100                 1.1s                        0.1s                11.0x
  5. Which Types Of Applications Should We Write In Node.js?

    IO-heavy? CPU-intensive? Memory-heavy?
  6. Did You Know?

    • Node.js is one of the fastest web server technologies available.
    • It outperforms many multi-threaded web server technologies.
    • It shines for high-concurrency, low-computation workloads.
  7. Node.js Architecture - Important Clarifications

    Libuv - a C library that handles asynchronous I/O operations.
    Core libraries - written in C/C++.
    JavaScript APIs - thin wrappers around the core libraries.
    I/O operations are almost as efficient as C/C++.
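    A minimal sketch of that layering (the file path is a hypothetical assumption): the fs.promises.readFile call is a thin JavaScript wrapper, and the actual read is performed by libuv's thread pool off the main thread, so the event loop stays free while the disk works.

    import { promises as fs } from 'fs';

    // Thin JS wrapper: the read itself is delegated to libuv's thread pool.
    async function readConfig(): Promise<void> {
      const data = await fs.readFile('./config.json', 'utf8'); // hypothetical file
      console.log(`read ${data.length} characters without blocking the event loop`);
    }

    readConfig().catch(console.error);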
  8. Common Activities Of A Web Server

    app.post('/people', async (req: Request, res: Response) => {
      const people: Person[] = req.body;
      if (!Array.isArray(people)) {
        return res.status(400).json({ message: 'Request body must be an array of persons.' });
      }
      for (const person of people) {
        if (
          typeof person.name !== 'string' ||
          typeof person.age !== 'number' ||
          typeof person.ID !== 'string'
        ) {
          return res.status(400).json({ message: 'Each person must have name (string), age (number), and ID (string).' });
        }
      }
      try {
        const db = await connectToMongo();
        const result = await db.collection('people').insertMany(people);
        res.status(201).json({ insertedCount: result.insertedCount, insertedIds: result.insertedIds });
      } catch (error) {
        console.error('Insert failed:', error);
        res.status(500).json({ message: 'Failed to insert people.' });
      }
    });
  9. Common Activities A Web Server Performs

    High amount of time (IO):
    • Database queries - IO
    • Reading files - IO
    • Network requests - IO
    • API calls - IO

    Low amount of time (CPU):
    • Parsing JSON - CPU (exception - large request body; see the sketch below)
    • Transforming data structures - CPU
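    A minimal sketch of why a large request body is the exception (the payload size is an arbitrary assumption): JSON.parse runs synchronously on the event loop, so parsing a big body is pure CPU time during which nothing else is handled.

    // Build a large JSON string and time the synchronous parse.
    const bigPayload = JSON.stringify(
      Array.from({ length: 500_000 }, (_, i) => ({ name: `person-${i}`, age: i % 100 }))
    );

    const start = process.hrtime.bigint();
    JSON.parse(bigPayload); // CPU-bound, blocks the event loop while it runs
    const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
    console.log(`parsed ${bigPayload.length} characters in ${elapsedMs.toFixed(1)}ms of CPU time`);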
  10. Node.js Challenge - CPU Intensive Algorithms

    while (true) {
      // Synchronously handle incoming events
      const events = getEvents();
      for (const event of events) {
        processEvent(event); // Blocks the loop
      }
    }
  11. Any compiler flow involves complex algorithms, large memory structures,

    and lots of computation - the kind of work that challenges JavaScript's execution model.
  12. Goroutines - lightweight threads managed by the Go runtime

    • Natural Parallelism
    • Direct Thread Access: CPU-intensive operations can run directly on threads without yielding.
    • Efficient Coordination: Go's channels and synchronization primitives are designed to coordinate concurrent work.
    • Memory Efficiency: Goroutines use minimal memory (a few KB each) compared to OS threads.
  13. Approach 1 - Split Task Into Smaller Tasks Inside The Event Loop

    while (true) {
      // Asynchronously handle incoming events in smaller chunks
      const events = getEvents();
      for (const event of events) {
        await new Promise(resolve => setTimeout(resolve, 0)); // Yield back to the event loop
        processEvent(event); // CPU-intensive task (now chunked)
      }
    }
  14. Problems With Splitting The Task Into Smaller Tasks Inside The Event Loop

    • The event loop is single-threaded.
    • V8 is optimized for I/O concurrency.
    • Even with chunking, the interpreter overhead and JIT warm-up add cost.
    • Garbage collection pauses (especially with large ASTs or IR graphs) can introduce latency spikes.
    • The actual delay is governed by the browser/Node event loop and can vary.
    • High-load situations cause event loop lag → tasks don't get timely execution (illustrated in the sketch below).
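    A minimal sketch of that last point (the 200 ms busy-wait is an arbitrary assumption): the setTimeout(..., 0) used to "yield" only fires when the loop is free, so under load the chunked work is delayed far beyond the requested 0 ms.

    // Schedule a zero-delay callback, then keep the event loop busy with CPU work.
    const scheduledAt = Date.now();
    setTimeout(() => {
      console.log(`timer ran after ${Date.now() - scheduledAt}ms (requested 0ms)`);
    }, 0);

    // Simulate other work hogging the loop (hypothetical 200ms busy-wait).
    const busyUntil = Date.now() + 200;
    while (Date.now() < busyUntil) { /* CPU-bound spin */ }
    // The timer only fires once the loop is free, so the observed delay is ~200ms.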
  15. Approach 2 - Worker Threads

    // Main
    const { Worker } = require('worker_threads');

    const worker = new Worker('./worker.js');

    // Send the CPU-intensive task to the worker (matches the handler on the next slide)
    worker.postMessage({ task: 'processData', data: { /* input for the computation */ } });

    worker.on('message', result => {
      // Process the message
    });
  16. Approach 2 - Worker Threads

    // Worker
    const { parentPort } = require('worker_threads');

    parentPort.on('message', ({ task, data }) => {
      if (task === 'processData') {
        const result = heavyComputation(data);
        parentPort.postMessage(result);
      }
    });

    function heavyComputation(data) {
      // CPU-intensive task
    }
  17. Why Worker Threads Were Not Efficient Enough For The TypeScript Compiler?

    • Verbose to manage.
    • Costly due to structured cloning (data copying).
    • Hard to coordinate shared state (e.g., symbol tables, IR graphs, caches).
  18. Worker threads provide parallelism but come with overhead:

    • Each worker has its own V8 instance.
    • Data passed between threads needs to be serialized and deserialized (see the sketch below).
    • Hard to coordinate shared state (symbol tables, IR graphs, caches).
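    A minimal sketch of that copying cost (the object shape and size are hypothetical assumptions): worker.postMessage() structured-clones its payload, and v8.serialize/deserialize is used here as a rough stand-in for that machinery, so a large AST-like object is fully encoded and copied rather than shared.

    import { serialize, deserialize } from 'v8';

    // A large AST-like object - the size is an arbitrary assumption.
    const fakeAst = Array.from({ length: 200_000 }, (_, i) => ({
      kind: 'Identifier',
      name: `node_${i}`,
      pos: i,
    }));

    const start = Date.now();
    const copy = deserialize(serialize(fakeAst)); // encode + decode = a full copy
    console.log(`copying ${copy.length} nodes took ${Date.now() - start}ms - nothing is shared`);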
  19. Comparing Worker Threads And Goroutines

    Aspect              Node.js Worker Threads                    Go Goroutines
    Concurrency Model   OS-level threads, manually created        Lightweight coroutines, managed by the Go runtime
    Startup Overhead    High (new V8 instance, event loop)        Tiny (~2KB per goroutine), near-zero startup cost
    Memory Sharing      Structured cloning or SharedArrayBuffer   Shared memory with channels or mutexes
    Execution Control   Manual queueing and message passing       Built-in scheduler with preemption
  20. Comparing Worker Threads And Goroutines

    Aspect                     Node.js Worker Threads                              Go Goroutines
    Code Complexity            Async APIs + worker management = verbose            Simple sync code with powerful concurrency primitives
    Performance                Good for limited parallel tasks; costly to scale    Excellent for high-volume, fine-grained concurrency
    Suitability for Compiler   Feasible but harder to scale, debug, and optimize   Ideal for parsing, analyzing, and transforming in parallel
    GC Behavior                One V8 GC per thread (heavy)                        Unified, efficient GC tuned for concurrency
  21. Simulating preemption in Node.js is clever and can work for

    lightweight workloads, but not for heavy computational loads that require real performance, parallelism, and control over execution flow. Compilers are just one example.