Gyaan

JavaScript

All 43 notes on one page

Fundamentals

1

let, const, and var

beginner variables ES6 scope

Understanding the real uses and differences between var, let, and const is fundamental to modern JavaScript.

var is function-scoped, meaning it is accessible anywhere within the function it is declared in. let and const are block-scoped, meaning they are only accessible within the block {} they are declared in.

function example() {
  if (true) {
    var a = 10;
    let b = 20;
    const c = 30;
  }
  console.log(a); // 10 (var is function-scoped)
  // console.log(b); // ReferenceError (let is block-scoped)
  // console.log(c); // ReferenceError (const is block-scoped)
}

var gets hoisted and set to undefined, so we can access it before the declaration line. But let and const sit in a Temporal Dead Zone until the code reaches the declaration line — trying to use them before that will throw a ReferenceError.

console.log(a); // undefined (var is hoisted and initialized)
// console.log(b); // ReferenceError (temporal dead zone)

var a = 1;
let b = 2;

const cannot be reassigned after declaration. However, if the value is an object or array, the properties or elements inside it can still be modified.

const user = { name: "Manish" };
user.name = "Pika"; // This works, object properties can be modified
// user = {}; // TypeError: Assignment to constant variable

In simple language, use const by default, use let when you need to reassign, and avoid var in modern JavaScript.


2

Data Types & Type Coercion

beginner types coercion equality truthy falsy

JavaScript has two categories of data types — primitive and reference. Understanding this distinction is key because they behave very differently in terms of how they are stored and compared.

Primitive types

There are 7 primitive types. These are immutable — when we “change” a string or number, we are actually creating a new value.

string: "hello"
number: 42, 3.14, NaN
boolean: true, false
null: intentional empty value
undefined: declared but not assigned
symbol: unique identifier
bigint: 9007199254740991n

Primitives are stored by value (conceptually, on the stack); the variable holds the actual value itself.

Reference types

Reference types include objects, arrays, and functions. These are stored in the heap, and the variable holds a reference (pointer) to the memory location.

const a = { name: "Manish" };
const b = a; // b points to the same object in memory
b.name = "Pika";
console.log(a.name); // "Pika" — both reference the same object

typeof quirks

The typeof operator has a couple of famous gotchas that come up in interviews all the time:

typeof "hello"     // "string"
typeof 42          // "number"
typeof true        // "boolean"
typeof undefined   // "undefined"
typeof Symbol()    // "symbol"
typeof 10n         // "bigint"
typeof {}          // "object"
typeof []          // "object"  — arrays are objects!
typeof null        // "object"  — this is a known JS bug since day 1
typeof NaN         // "number"  — NaN is technically "Not a Number" but its type is number
typeof function(){} // "function"

The typeof null === "object" is a bug from the very first version of JavaScript that was never fixed for backward compatibility reasons. To check for null, just use value === null.

Type coercion: == vs ===

This is one of the most asked interview questions. In simple language:

  • === (strict equality) compares value and type — no conversion happens
  • == (loose equality) converts the values to the same type first, then compares
0 == ""        // true  — both coerced to 0
0 === ""       // false — number vs string
false == 0     // true  — false becomes 0
false === 0    // false — boolean vs number
null == undefined  // true  — special rule, they are loosely equal
null === undefined // false — different types
"1" == 1       // true  — string "1" converted to number 1
"1" === 1      // false — string vs number

The golden rule is: always use === unless you have a very specific reason to use ==. The only common exception is checking value == null which catches both null and undefined in one check.
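A quick sketch of that one exception (isMissing is a hypothetical helper name, used only for illustration):

```javascript
// value == null is true only for null and undefined, nothing else.
function isMissing(value) {
  return value == null;
}

console.log(isMissing(null));      // true
console.log(isMissing(undefined)); // true
console.log(isMissing(0));         // false (0 is a real value)
console.log(isMissing(""));        // false (empty string is a real value)
```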

Truthy and falsy values

In JavaScript, every value is either “truthy” or “falsy” — meaning it evaluates to true or false when used in a boolean context (like an if statement).

There are exactly 8 falsy values in JavaScript. Everything else is truthy.

Falsy (8 values):
  • false
  • 0
  • -0
  • 0n (BigInt zero)
  • "" (empty string)
  • null
  • undefined
  • NaN

Truthy (everything else):
  • "0" (non-empty string)
  • "false" (non-empty string)
  • [] (empty array!)
  • {} (empty object!)
  • function(){}
  • 42, -1, Infinity
  • ...any non-falsy value

A common gotcha: empty arrays [] and empty objects {} are truthy. This trips up a lot of people.

if ([])  console.log("truthy"); // prints "truthy" — empty array is truthy!
if ({})  console.log("truthy"); // prints "truthy" — empty object is truthy!
if ("0") console.log("truthy"); // prints "truthy" — non-empty string is truthy!
if (0)   console.log("truthy"); // does NOT print — 0 is falsy

Explicit coercion

We can manually convert types using Number(), String(), and Boolean():

Number("42")      // 42
Number("")        // 0
Number("hello")   // NaN
Number(true)      // 1
Number(null)      // 0
Number(undefined) // NaN

String(42)        // "42"
String(null)      // "null"
String(undefined) // "undefined"

Boolean(0)        // false
Boolean("")       // false
Boolean("hello")  // true
Boolean([])       // true — again, empty array is truthy!

Coercion gotchas

These are the kind of examples interviewers love to throw at you:

"5" + 3      // "53"  — + with a string triggers string concatenation
"5" - 3      // 2     — minus always does math, so "5" becomes 5
true + true  // 2     — true is 1, so 1 + 1
[] + []      // ""    — both become empty strings, "" + ""
[] + {}      // "[object Object]" — array becomes "", object becomes "[object Object]"
{} + []      // 0     — {} is parsed as empty block, so it's just +[] which is 0

In simple language, the + operator is the main troublemaker. If either side is a string, it does string concatenation. All other math operators (-, *, /) will try to convert both sides to numbers.


3

Hoisting

beginner hoisting scope TDZ variables functions

Hoisting is one of those JavaScript behaviors that confuses a lot of beginners. In simple language, hoisting means JavaScript moves declarations to the top of their scope before the code actually runs. But the key detail is — only declarations are hoisted, not initializations.

Think of it like this: before your code runs, JavaScript does a quick scan and says “okay, I see these variables and functions exist” — but it doesn’t assign values yet (for var).

var hoisting

Variables declared with var are hoisted to the top of their function scope and initialized with undefined. This means we can access them before the line where they are declared, but the value will be undefined.

console.log(name); // undefined (not an error!)
var name = "Manish";
console.log(name); // "Manish"

What JavaScript actually sees during execution:

var name; // declaration hoisted, initialized with undefined
console.log(name); // undefined
name = "Manish"; // assignment stays in place
console.log(name); // "Manish"

let and const hoisting (Temporal Dead Zone)

let and const are also hoisted — but they are not initialized. They sit in something called the Temporal Dead Zone (TDZ) from the start of the block until the declaration line. Trying to access them in the TDZ throws a ReferenceError.

{
  // Temporal Dead Zone starts at the top of the block
  // accessing x here: ReferenceError
  let x = 10; // TDZ ends here
  // x is initialized and safe to use from this point on
}
// console.log(age); // ReferenceError: Cannot access 'age' before initialization
let age = 25;
console.log(age); // 25

The same applies to const. The important thing to remember is — let and const are hoisted (JavaScript knows they exist), but they are not accessible until the declaration line.

Function declaration hoisting

Function declarations are fully hoisted — both the name and the body. This means we can call a function before it appears in the code. This is the one case where hoisting actually feels useful.

greet(); // "Hello!" — works perfectly!

function greet() {
  console.log("Hello!");
}

Function expressions and arrow functions are NOT hoisted

Function expressions (including arrow functions) behave like variable assignments. The variable is hoisted according to var/let/const rules, but the function body is not. So we cannot call them before the assignment.

greet(); // "Hello!"
function greet() { console.log("Hello!"); }

// sayHi(); // ReferenceError
const sayHi = () => { console.log("Hi!"); };

// hello(); // ReferenceError (const declaration, still in the TDZ)
const hello = function() { console.log("Hello!"); };

If we used var instead of const, we would get a TypeError (because the variable is undefined, and undefined is not a function). With const/let, we get a ReferenceError because of the Temporal Dead Zone.

// sayHi(); // TypeError: sayHi is not a function
var sayHi = () => { console.log("Hi!"); };

// greetFn(); // ReferenceError: Cannot access 'greetFn' before initialization
const greetFn = () => { console.log("Hey!"); };

Quick summary

Declaration                 | Hoisted?          | Initialized?
var                         | Yes               | undefined
let / const                 | Yes               | No (TDZ)
function declaration        | Yes               | Yes (full body)
function expression / arrow | Like its variable | No

In simple language, only function declarations are fully hoisted and usable before their line. Everything else either gives undefined (var) or throws an error (let/const). When in doubt, just declare things before you use them and you will never have hoisting issues.


4

use strict

beginner strict-mode best-practices

Strict mode makes it easier to write “secure” JavaScript.

  • Strict mode changes previously accepted “bad syntax” into real errors.
  • In normal JavaScript, mistyping a variable name creates a new global variable. In strict mode, this will throw an error, making it impossible to accidentally create a global variable.
  • In normal JavaScript, a developer will not receive any error feedback when assigning values to non-writable properties.
  • In strict mode, any assignment to a non-writable property, a getter-only property, a non-existing property, a non-existing variable, or a non-existing object, will throw an error.
// Without strict mode — silently creates a global variable
function withoutStrict() {
  myVar = 10; // No error, creates a global variable
}

// With strict mode — throws an error
function withStrict() {
  "use strict";
  myVar = 10; // ReferenceError: myVar is not defined
}
"use strict";

// Cannot delete a variable or function
let x = 10;
// delete x; // SyntaxError

// Duplicate parameter names are not allowed
// function sum(a, a) {} // SyntaxError

// Cannot use reserved keywords as variable names
// let private = 10; // SyntaxError
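The non-writable property case from the bullet points above can be sketched like this, using Object.freeze to make a property read-only:

```javascript
"use strict";

// In strict mode, assigning to a non-writable property throws a TypeError.
// Without strict mode, the exact same assignment fails silently.
const settings = Object.freeze({ theme: "light" });

try {
  settings.theme = "dark"; // TypeError: Cannot assign to read only property
} catch (err) {
  console.log(err.name); // "TypeError"
}

console.log(settings.theme); // "light", the object was never modified
```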

5

Destructuring

beginner ES6 destructuring objects arrays

Destructuring is an ES6 feature that lets us unpack values from arrays or properties from objects into individual variables. Instead of accessing things one by one, we can grab multiple values in a single line. It makes our code cleaner and easier to read.

Object destructuring

The basic idea is simple — we use curly braces on the left side of the assignment and match the property names:

const user = { name: "Manish", age: 25, city: "Pune" };

const { name, age } = user;
console.log(name); // "Manish"
console.log(age);  // 25

Rename variables

Sometimes the property name isn’t what we want to use in our code. We can rename it with a colon:

const { name: fullName, age: userAge } = user;
console.log(fullName); // "Manish"
console.log(userAge);  // 25

Default values

If a property doesn’t exist, we can provide a fallback:

const { name, country = "India" } = user;
console.log(country); // "India" (user doesn't have a country property)

Array destructuring

For arrays, we use square brackets. The values are assigned based on position, not name:

const colors = ["red", "green", "blue"];

const [first, second, third] = colors;
console.log(first);  // "red"
console.log(second); // "green"

Skip elements

We can skip elements by leaving empty spots with commas:

const [, , third] = colors;
console.log(third); // "blue"

Rest with destructuring

We can use ...rest to collect the remaining elements into a new array:

const [head, ...tail] = [1, 2, 3, 4, 5];
console.log(head); // 1
console.log(tail); // [2, 3, 4, 5]

Nested destructuring

We can destructure nested objects and arrays too. Just match the structure:

const user = {
  name: "Manish",
  address: {
    city: "Pune",
    zip: "411001"
  }
};

const { address: { city, zip } } = user;
console.log(city); // "Pune"
console.log(zip);  // "411001"

For nested arrays:

const matrix = [[1, 2], [3, 4]];
const [[a, b], [c, d]] = matrix;
console.log(a, d); // 1, 4

Function parameter destructuring

This is one of the most practical uses of destructuring. Instead of passing an object and accessing properties inside the function, we can destructure right in the parameter:

// Without destructuring
function greet(user) {
  console.log(`Hi ${user.name}, age ${user.age}`);
}

// With destructuring — much cleaner
function greet({ name, age }) {
  console.log(`Hi ${name}, age ${age}`);
}

greet({ name: "Manish", age: 25 }); // "Hi Manish, age 25"

This pattern is super common in React components and API handlers.
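Parameter destructuring combines nicely with default values. A small sketch (createButton is a made-up name for illustration):

```javascript
// Destructured parameters with defaults, plus a default {} for the whole
// argument so the function can even be called with no arguments at all.
function createButton({ label = "Click", color = "blue" } = {}) {
  return `[${color}] ${label}`;
}

console.log(createButton());                                  // "[blue] Click"
console.log(createButton({ label: "Save" }));                 // "[blue] Save"
console.log(createButton({ label: "Delete", color: "red" })); // "[red] Delete"
```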

Practical use: swapping variables

Before destructuring, swapping two variables required a temporary variable. Now we can do it in one line:

let a = 1;
let b = 2;

[a, b] = [b, a];

console.log(a); // 2
console.log(b); // 1

In simple language, destructuring is just a shorthand for pulling out values. Curly braces {} for objects (match by name), square brackets [] for arrays (match by position). Once we get used to it, we will use it everywhere.


6

Spread & Rest Operators

beginner ES6 spread rest arrays objects

The ... syntax in JavaScript does two different things depending on where we use it. This confuses a lot of people because they look identical. The simple rule is:

  • Spread = expands/unpacks elements (used when we are providing values)
  • Rest = collects/gathers elements (used when we are receiving values)
Spread (expands):
  • [...arr] — copy array
  • {...obj} — copy object
  • fn(...args) — pass as args
  • [1, 2, 3] becomes 1, 2, 3

Rest (collects):
  • fn(...params) — gather args
  • [a, ...rest] — remaining items
  • {a, ...rest} — remaining props
  • 1, 2, 3 becomes [1, 2, 3]

Spread with arrays

We can use spread to copy arrays, merge them, or pass array elements as function arguments:

const nums = [1, 2, 3];

// Copy an array (shallow copy)
const copy = [...nums];

// Merge arrays
const more = [0, ...nums, 4, 5]; // [0, 1, 2, 3, 4, 5]

// Pass as function arguments
console.log(Math.max(...nums)); // 3

Spread with objects

Same idea works with objects. We can copy, merge, or override properties:

const user = { name: "Manish", age: 25 };

// Copy an object (shallow copy)
const copy = { ...user };

// Merge objects (later properties override earlier ones)
const updated = { ...user, age: 26, city: "Pune" };
// { name: "Manish", age: 26, city: "Pune" }

When merging objects, if two objects have the same key, the last one wins:

const defaults = { theme: "light", lang: "en" };
const prefs = { theme: "dark" };
const config = { ...defaults, ...prefs };
// { theme: "dark", lang: "en" } — prefs overrides defaults

Rest in function parameters

Rest collects all remaining arguments into an array. This replaces the old arguments object with a real array:

function sum(...numbers) {
  return numbers.reduce((total, n) => total + n, 0);
}

console.log(sum(1, 2, 3, 4)); // 10

We can also combine regular parameters with rest — but rest must always be last:

function log(level, ...messages) {
  messages.forEach(msg => console.log(`[${level}] ${msg}`));
}

log("INFO", "Server started", "Listening on port 3000");

Rest in destructuring

Rest works in both array and object destructuring to capture “everything else”:

// Array destructuring with rest
const [first, ...remaining] = [1, 2, 3, 4];
console.log(first);     // 1
console.log(remaining); // [2, 3, 4]

// Object destructuring with rest
const { name, ...details } = { name: "Manish", age: 25, city: "Pune" };
console.log(name);    // "Manish"
console.log(details); // { age: 25, city: "Pune" }

Common interview question

What is the difference between spread and rest?

They use the same ... syntax but do opposite things. Spread expands an iterable into individual elements (we are giving values out). Rest collects multiple individual elements into a single array or object (we are gathering values in). The context tells us which one it is — if ... appears in a function definition or destructuring pattern, it is rest. If it appears in a function call, array literal, or object literal, it is spread.

// Spread — expanding values
const arr = [...[1, 2], ...[3, 4]]; // [1, 2, 3, 4]

// Rest — collecting values
const [first, ...rest] = arr; // first = 1, rest = [2, 3, 4]

In simple language, spread is like opening a box and spilling everything out. Rest is like sweeping everything remaining into a box.


7

Template Literals

beginner ES6 strings template-literals

Template literals (introduced in ES6) are strings wrapped in backticks (`) instead of single or double quotes. They give us two main superpowers: expression interpolation and multi-line strings.

Expression interpolation

Instead of concatenating strings with +, we can embed expressions directly using ${expression}:

const name = "Manish";
const age = 25;

// Old way
const msg1 = "Hi, I'm " + name + " and I'm " + age + " years old.";

// Template literal — much cleaner
const msg2 = `Hi, I'm ${name} and I'm ${age} years old.`;

We can put any valid JavaScript expression inside ${} — not just variables:

console.log(`2 + 3 = ${2 + 3}`);           // "2 + 3 = 5"
console.log(`Uppercase: ${"hello".toUpperCase()}`); // "Uppercase: HELLO"
console.log(`Is adult: ${age >= 18 ? "Yes" : "No"}`); // "Is adult: Yes"

Multi-line strings

With regular quotes, creating a multi-line string requires \n. With backticks, we just press Enter and the line breaks are preserved:

// Old way
const old = "Line 1\n" +
            "Line 2\n" +
            "Line 3";

// Template literal
const html = `
  <div>
    <h1>Hello</h1>
    <p>World</p>
  </div>
`;

This is super handy when writing HTML strings, SQL queries, or any multi-line text in our code.

Tagged templates

This is a more advanced feature, but worth knowing about. A tagged template lets us parse template literals with a custom function. The function receives the string parts and the interpolated values separately:

function highlight(strings, ...values) {
  return strings.reduce((result, str, i) => {
    return result + str + (values[i] ? `<mark>${values[i]}</mark>` : "");
  }, "");
}

const name = "Manish";
const role = "developer";
const output = highlight`My name is ${name} and I'm a ${role}.`;
// "My name is <mark>Manish</mark> and I'm a <mark>developer</mark>."

Tagged templates are used in libraries like styled-components (CSS-in-JS) and graphql-tag (GraphQL queries). We might not write tagged templates ourselves every day, but it is good to understand how they work when we see them in the wild.

In simple language, template literals are just a better way to write strings. Use backticks, drop in variables with ${}, and enjoy multi-line strings without \n. Once we start using them, we will never want to go back to string concatenation.


Functions

8

Arrow Functions

beginner ES6 functions

Introduced in ES6, arrow functions allow us to write shorter function syntax:

let myFunction = (a, b) => a * b;

The most important difference is that arrow functions do not have their own this. They take this from the parent scope where they are defined.

const person = {
  name: "Manish",
  greetRegular: function() {
    console.log(this.name); // "Manish" (this refers to person)
  },
  greetArrow: () => {
    console.log(this.name); // undefined (this refers to parent/window scope)
  }
};

If there is only one parameter, we can skip the parentheses. If the body is a single expression, we can also skip the curly braces and return keyword — the value is returned automatically.

const double = x => x * 2;

const add = (a, b) => a + b;

const getUser = () => ({ name: "Manish" }); // wrap object in () for implicit return

Arrow functions cannot be used as constructors, which means we cannot use new keyword with them — it will throw a TypeError.
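A quick sketch of that last point (Person and ArrowPerson are illustrative names):

```javascript
const ArrowPerson = (name) => { this.name = name; };

try {
  new ArrowPerson("Manish"); // throws: arrow functions are not constructors
} catch (err) {
  console.log(err instanceof TypeError); // true
}

// Regular function declarations work fine as constructors
function Person(name) { this.name = name; }
console.log(new Person("Manish").name); // "Manish"
```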


9

Higher-Order Functions

intermediate functions functional-programming array-methods map filter reduce

A higher-order function is any function that does at least one of these two things:

  1. Takes a function as an argument (a callback)
  2. Returns a function

That is it. If a function does either of those, it is a higher-order function. We have already been using them — setTimeout, addEventListener, and all the array methods like map and filter are higher-order functions.

Array methods — the most common higher-order functions

These are the ones that come up in interviews all the time. Let’s go through each one.

Input array: [1, 2, 3, 4, 5]
  • map(x => x * 2): transforms each element → [2, 4, 6, 8, 10]
  • filter(x => x > 2): keeps matching elements → [3, 4, 5]
  • reduce((a, b) => a + b): accumulates to one value → 15

map — transform every element

map creates a new array by running a function on every element. The original array is not modified.

const nums = [1, 2, 3, 4];
const doubled = nums.map(n => n * 2);
console.log(doubled); // [2, 4, 6, 8]

filter — keep elements that pass a test

filter creates a new array with only the elements where the callback returns true.

const nums = [1, 2, 3, 4, 5, 6];
const evens = nums.filter(n => n % 2 === 0);
console.log(evens); // [2, 4, 6]

reduce — boil down to a single value

reduce is the most powerful (and most confusing) array method. It takes a callback and an initial value, and reduces the array to a single value by accumulating.

const nums = [1, 2, 3, 4];
const sum = nums.reduce((acc, curr) => acc + curr, 0);
console.log(sum); // 10
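The accumulator does not have to be a number; reduce can build any shape. A small sketch counting occurrences into an object (the fruits data is made up for illustration):

```javascript
const fruits = ["apple", "banana", "apple", "cherry", "banana", "apple"];

// Accumulate into an object of counts instead of a single number
const counts = fruits.reduce((acc, fruit) => {
  acc[fruit] = (acc[fruit] || 0) + 1;
  return acc;
}, {});

console.log(counts); // { apple: 3, banana: 2, cherry: 1 }
```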

forEach — do something with each element (no return)

forEach runs a function on every element but does not return anything. It is for side effects (like logging or modifying external state).

const names = ["Manish", "Pika", "Luna"];
names.forEach(name => console.log(`Hello, ${name}!`));

find — get the first match

find returns the first element that matches the condition, or undefined if nothing matches.

const users = [{ id: 1, name: "Manish" }, { id: 2, name: "Pika" }];
const found = users.find(u => u.id === 2);
console.log(found); // { id: 2, name: "Pika" }

some and every — boolean checks

some returns true if at least one element passes. every returns true if all elements pass.

const nums = [1, 2, 3, 4, 5];
console.log(nums.some(n => n > 4));  // true  — 5 is greater than 4
console.log(nums.every(n => n > 0)); // true  — all are positive
console.log(nums.every(n => n > 3)); // false — 1, 2, 3 fail the check

Custom higher-order function

We can write our own higher-order functions too. Here is one that returns a function:

function createMultiplier(factor) {
  return function(number) {
    return number * factor;
  };
}

const double = createMultiplier(2);
const triple = createMultiplier(3);

console.log(double(5)); // 10
console.log(triple(5)); // 15

This is also an example of closures — the returned function “remembers” the factor from its parent scope. Higher-order functions and closures go hand in hand.

Why higher-order functions matter

They let us write declarative code — we describe what we want, not how to do it step by step. Compare:

// Imperative (how)
const results = [];
for (let i = 0; i < nums.length; i++) {
  if (nums[i] > 2) results.push(nums[i] * 2);
}

// Declarative (what) — using higher-order functions
const results = nums.filter(n => n > 2).map(n => n * 2);

In simple language, higher-order functions are just functions that work with other functions. They are everywhere in JavaScript and are a must-know for interviews. Master map, filter, and reduce and we are 80% there.


10

Closures

intermediate scope functions lexical-environment

A closure is the combination of a function bundled together (enclosed) with references to its surrounding state (the lexical environment). In other words, a closure gives you access to an outer function’s scope from an inner function. In JavaScript, closures are created every time a function is created, at function creation time.

One of the most common uses of closures is data privacy. We can create variables inside a function that cannot be accessed from outside.

function createCounter() {
  let count = 0; // private variable, not accessible from outside
  return {
    increment: function() {
      count++;
    },
    getCount: function() {
      return count;
    }
  };
}

const counter = createCounter();
counter.increment();
counter.increment();
console.log(counter.getCount()); // 2
// console.log(count); // ReferenceError: count is not defined

A very common interview question is closures inside loops. When we use var in a loop with setTimeout, all the callbacks share the same i variable, so they all print the final value instead of each value.

// Problem with var
for (var i = 0; i < 3; i++) {
  setTimeout(function() {
    console.log(i);
  }, 1000);
}
// Output: 3, 3, 3 (all reference the same i)

// Fix with let (block-scoped, creates new binding each iteration)
for (let i = 0; i < 3; i++) {
  setTimeout(function() {
    console.log(i);
  }, 1000);
}
// Output: 0, 1, 2
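Before let existed, the common fix was an IIFE that creates a fresh scope per iteration. Here is a sketch of the same idea, using an array of callbacks instead of setTimeout so the result is easy to inspect synchronously:

```javascript
// Each IIFE call creates a new scope, so every callback
// closes over its own copy j instead of the shared var i.
var callbacks = [];
for (var i = 0; i < 3; i++) {
  (function (j) {
    callbacks.push(function () { return j; });
  })(i);
}

console.log(callbacks.map(fn => fn())); // [0, 1, 2], not [3, 3, 3]
```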

11

Currying

intermediate functional-programming functions

Currying is a functional programming technique: it transforms a function that takes multiple arguments into a sequence of functions, each taking a single argument.

// From This
function calculate(a, b, c) {
  return a + b + c;
}
calculate(1, 2, 3); // 6

// To this
const calculateCurried = (a) => {
  return (b) => {
    return (c) => {
      return a + b + c;
    }
  }
}

calculateCurried(1)(2)(3) // 6

Currying is useful when we want to create new functions from an existing one by pre-filling some arguments. For example, from a general multiply function we can create double and triple:

const multiply = (a) => (b) => a * b;

const double = multiply(2);
const triple = multiply(3);

console.log(double(5)); // 10
console.log(triple(5)); // 15

12

Memoization

intermediate performance optimization caching

Memoization is an optimization technique used primarily to speed up computer programs by storing the results of expensive function calls and returning the cached result when the same inputs occur again.

In simple language, if a function is called with the same arguments again, return the stored result instead of computing it again.

function memoize(fn) {
  const cache = {};
  return function(...args) {
    const key = JSON.stringify(args);
    if (key in cache) { // works even when the cached result is falsy (0, "", false)
      console.log("Returning from cache");
      return cache[key];
    }
    const result = fn(...args);
    cache[key] = result;
    return result;
  };
}

const expensiveAdd = (a, b) => {
  console.log("Computing...");
  return a + b;
};

const memoizedAdd = memoize(expensiveAdd);

memoizedAdd(1, 2); // "Computing..." → 3
memoizedAdd(1, 2); // "Returning from cache" → 3
memoizedAdd(3, 4); // "Computing..." → 7

Memoization works best with functions that always give the same output for the same input (these are called pure functions). It is commonly used in recursive functions like fibonacci, caching API responses, and heavy calculations in UI rendering.
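As a sketch of the fibonacci case, here the cache is threaded through the recursion directly (a slight variation on the memoize() wrapper above):

```javascript
// Naive fib recomputes the same values exponentially many times.
// Caching each result makes every fib(k) compute exactly once.
function fib(n, cache = {}) {
  if (n <= 1) return n;
  if (n in cache) return cache[n];
  cache[n] = fib(n - 1, cache) + fib(n - 2, cache);
  return cache[n];
}

console.log(fib(10)); // 55
console.log(fib(40)); // 102334155, instant with the cache
```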


13

Call, Apply and Bind

intermediate this functions context

But first, what is this?

In JavaScript, this refers to the object that is currently calling the function. But the tricky part is — this changes depending on how the function is called, not where it is written.

const user = {
  name: "Manish",
  greet: function() {
    console.log("Hello, " + this.name);
  }
};

user.greet(); // "Hello, Manish" — this = user ✓

This works fine. But what happens when we take the function out of the object?

const greetFn = user.greet;
greetFn(); // "Hello, undefined" — this = window (or undefined in strict mode) ✗

The moment we store the method in a variable and call it separately, this no longer points to user. It has lost its context. This is one of the most common sources of bugs in JavaScript.

The problem: losing this

This happens in many real-world situations:

const user = {
  name: "Manish",
  greet: function() {
    console.log("Hello, " + this.name);
  }
};

// Problem 1: passing method as a callback
setTimeout(user.greet, 1000); // "Hello, undefined" ✗

// Problem 2: extracting method to a variable
const fn = user.greet;
fn(); // "Hello, undefined" ✗

// Problem 3: using in event handlers
button.addEventListener("click", user.greet); // this = button, not user ✗

In all these cases, we lose the original this context. This is exactly why call(), apply(), and bind() exist — they let us manually set what this should be.

bind()
Returns a new function
Does NOT call immediately
Use: save for later
call()
Calls immediately
Args: one by one
fn.call(obj, a, b)
apply()
Calls immediately
Args: as an array
fn.apply(obj, [a, b])

Bind

The bind() method creates a new function that, when called, has its this keyword set to the provided value. It does not call the function immediately — it returns a copy with this permanently fixed.

const pokemon = {
    firstname: 'Pika',
    lastname: 'Chu',
    getPokeName: function() {
        const fullname = this.firstname + ' ' + this.lastname;
        return fullname;
    }
};

const pokemonName = function() {
    console.log(this.getPokeName() + ' I choose you!');
};

const logPokemon = pokemonName.bind(pokemon);
// creates a new function with `this` permanently bound to the pokemon object

logPokemon(); // 'Pika Chu I choose you!'

When we use the bind() method:

  1. The JS engine creates a copy of the pokemonName function and binds pokemon as its this value. It is important to understand that bind() copies the function; the original is untouched.
  2. With that copy stored as logPokemon, we can call it even though pokemonName was never defined on the pokemon object. Inside the call, this resolves to pokemon, so its properties (Pika and Chu) and its methods are available.

After we bind() a value we can use the function just like it was any other normal function.

When to use bind in real life

The most common use case is when passing object methods as callbacks:

const user = {
  name: "Manish",
  greet: function() {
    console.log("Hello, " + this.name);
  }
};

// Without bind — this is lost
setTimeout(user.greet, 1000); // "Hello, undefined" ✗

// With bind — this is permanently fixed
setTimeout(user.greet.bind(user), 1000); // "Hello, Manish" ✓

Call & Apply

The call() method calls a function immediately with a given this value and arguments provided individually. We can call any function, and explicitly specify what this should reference within the calling function.

The main differences between bind() and call():

  1. call() also accepts additional parameters, passed individually after the this value.
  2. call() executes the function it is called upon right away.
  3. call() does not make a copy of the function; no new function is created.

call() and apply() serve the exact same purpose. The only difference is that call() expects all parameters to be passed in individually, whereas apply() expects an array of all parameters.

const pokemon = {
    firstname: 'Pika',
    lastname: 'Chu',
    getPokeName: function() {
        return this.firstname + ' ' + this.lastname;
    }
};

const pokemonName = function(snack, hobby) {
    console.log(this.getPokeName() + ' loves ' + snack + ' and ' + hobby);
};

pokemonName.call(pokemon, 'sushi', 'algorithms');
// Pika Chu loves sushi and algorithms

pokemonName.apply(pokemon, ['sushi', 'algorithms']);
// Pika Chu loves sushi and algorithms

When to use call/apply in real life

A common use case is borrowing methods from one object and using them on another:

const person1 = {
  name: "Manish",
  introduce: function(city, job) {
    console.log(this.name + " from " + city + ", works as " + job);
  }
};

const person2 = { name: "Rahul" };

// person2 doesn't have introduce(), but we can borrow it from person1
person1.introduce.call(person2, "Mumbai", "Developer");
// "Rahul from Mumbai, works as Developer"

Another classic use case is using Math.max with an array — Math.max does not accept an array, so we use apply:

const numbers = [5, 2, 9, 1, 7];

Math.max.apply(null, numbers); // 9

// In modern JS, we can also use the spread operator
Math.max(...numbers); // 9

Quick summary

In simple language — all three methods let us control what this points to. Use bind() when we want to fix this for later use (callbacks, event handlers). Use call() or apply() when we want to borrow a method and run it immediately on a different object. The only difference between call and apply is how we pass arguments — one by one or as an array.


14

IIFE (Immediately Invoked Function Expression)

intermediate functions scope IIFE module-pattern

An IIFE (pronounced “iffy”) is a function that runs immediately after it is defined. We do not give it a name, we do not store it in a variable — we define it and call it in one go.

Syntax

There are two common ways to write an IIFE:

// Classic function
(function() {
  console.log("I run immediately!");
})();

// Arrow function
(() => {
  console.log("I also run immediately!");
})();

The wrapping parentheses turn function(){} into a function expression, and the trailing () immediately invokes it. Without them, JavaScript would parse function at the start of a statement as a function declaration, so the trailing () would be a syntax error.

We can also pass arguments to an IIFE:

(function(name) {
  console.log(`Hello, ${name}!`);
})("Manish"); // "Hello, Manish!"

Why use an IIFE?

The main reason is to avoid polluting the global scope. Any variables declared inside an IIFE are private — they cannot be accessed from outside.

(function() {
  var secret = "hidden";
  let also_secret = "also hidden";
  console.log(secret); // "hidden"
})();

// console.log(secret); // ReferenceError — not accessible

Before ES6 gave us let, const, and block scoping, IIFEs were the only way to create a private scope. With var being function-scoped, wrapping code in an IIFE was the standard way to keep variables contained.
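A classic example from that era is the var-in-a-loop problem: every closure shared the same loop variable, and wrapping the loop body in an IIFE was the standard fix. Here is a small synchronous illustration of both behaviors:

```javascript
// With var, every closure shares the one function-scoped i
var fns = [];
for (var i = 0; i < 3; i++) {
  fns.push(function () { return i; });
}
console.log(fns.map(f => f())); // [3, 3, 3] — all see the final value of i

// An IIFE captures each value in its own private scope
var fixed = [];
for (var j = 0; j < 3; j++) {
  (function (captured) {
    fixed.push(function () { return captured; });
  })(j);
}
console.log(fixed.map(f => f())); // [0, 1, 2]
```

Today, simply declaring the loop variable with let gives a fresh binding per iteration and makes the IIFE unnecessary.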

The module pattern

One of the most common uses of IIFEs was creating modules with private state and public methods. This was the go-to pattern before ES modules existed:

const counter = (function() {
  let count = 0; // private variable

  return {
    increment() { count++; },
    decrement() { count--; },
    getCount()  { return count; }
  };
})();

counter.increment();
counter.increment();
console.log(counter.getCount()); // 2
// console.log(count); // ReferenceError — count is private

This combines IIFEs with closures — the returned object’s methods “close over” the count variable, keeping it private while exposing a clean public API.

Is IIFE still relevant?

Honestly, most of what IIFEs solved is now handled by better tools:

  • Block scoping — let and const are block-scoped, so we do not need a function wrapper just for scoping
  • Modules — ES modules (import/export) give us proper file-level scoping and dependency management
  • Bundlers — tools like Webpack and Vite handle module isolation for us

That said, IIFEs are still worth knowing for a few reasons:

  1. Interviews — they come up often, especially in the context of closures and scoping
  2. Legacy code — tons of existing JavaScript uses IIFEs
  3. Quick isolation — sometimes we still want to run a one-off block without leaking variables, and an IIFE is a clean way to do it
  4. Top-level await workaround — in environments that do not support top-level await, wrapping in an async IIFE is common:
(async () => {
  const data = await fetch("/api/data");
  const json = await data.json();
  console.log(json);
})();

In simple language, an IIFE is a function that calls itself immediately. It was a big deal before ES6, mainly for creating private scopes. Today we have better tools, but understanding IIFEs helps us read older code and answer interview questions confidently.


Scope & Execution

15

Lexical Scope

intermediate scope closures

A lexical scope in JavaScript means that a variable defined in an outer scope is accessible inside any function defined after the variable's declaration. The opposite is not true: variables defined inside a function are not accessible outside that function.

In simple language, lexical scope means a variable defined in an upper scope is automatically available inside your scope, so you don't need to pass it in.

var x = 2;                 // global scope
var add = function() {
    var y = 1;             // add()'s own scope
    return x + y;          // ✓ can access x from the outer scope
};
// ✗ y is not accessible out here
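Scopes nest, and each inner function can reach every scope above it. A small illustration (all names here are made up for the example):

```javascript
const app = "Gyaan";             // global scope

function outer() {
  const section = "JavaScript";  // outer's scope
  function inner() {
    const note = "Lexical Scope";
    // inner can read all three; outer cannot read `note`
    return app + " / " + section + " / " + note;
  }
  return inner();
}

console.log(outer()); // "Gyaan / JavaScript / Lexical Scope"
```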

16

Execution Context

intermediate execution-context hoisting scope-chain call-stack

Every time JavaScript runs code, it creates something called an Execution Context. Think of it as a box that holds everything the code needs to run — the variables, functions, and the value of this.

Two types of Execution Context

  1. Global Execution Context (GEC) — Created when the script first runs. There’s only one per program. In browsers, the global object is window and this is set to window (in Node.js, it’s global).
  2. Function Execution Context (FEC) — Created every time a function is called. Each function gets its own execution context.

Two phases of every Execution Context

Every execution context goes through two phases:

1. Creation Phase (Memory Allocation)

Before running a single line of code, JavaScript scans the code and:

  • Allocates memory for var variables and sets them to undefined (let and const are hoisted too, but left uninitialized in the Temporal Dead Zone)
  • Stores the entire function declarations in memory
  • Determines the value of this
  • Sets up the Scope Chain (reference to the outer environment)

This is why hoisting works — variables and functions are already in memory before the code runs.
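We can see the creation phase at work by reading values before their declaration lines (the names here are arbitrary):

```javascript
console.log(typeof sayHi); // "function": the whole declaration is already in memory
console.log(greeting);     // undefined: var is allocated but not yet assigned
// console.log(title);     // would throw ReferenceError: let is in the temporal dead zone

function sayHi() { return "hi"; }
var greeting = "hello";
let title = "Mr";
```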

2. Execution Phase

Now JavaScript goes through the code line by line:

  • Assigns actual values to variables
  • Executes function calls (which create new execution contexts)
  • Runs all the logic
var name = "Manish";
function greet() {
  var message = "Hello";
  console.log(message + " " + name);
}
greet();

// Creation Phase:
//   name → undefined, greet → full function, this → window
// Execution Phase:
//   name → "Manish", greet() called → new execution context created

The Call Stack

JavaScript uses a Call Stack to manage execution contexts. When a function is called, its execution context is pushed onto the stack. When it returns, it’s popped off.

The stack grows upward. The Global EC is created first and always sits at the bottom. When outer() is called, its execution context is pushed on top; when outer() calls inner(), inner()'s context is pushed above that. When inner() finishes it is popped off and control returns to outer(), which is popped in turn, leaving only the Global EC.
function outer() {
  var a = 10;
  function inner() {
    var b = 20;
    console.log(a + b); // 30 — inner can access outer's variables via scope chain
  }
  inner(); // new execution context pushed onto stack
}
outer(); // new execution context pushed onto stack

Variable Environment and Scope Chain

Each execution context has a Variable Environment — the place where its local variables live. It also has a reference to its outer environment (the parent scope). This chain of references is the Scope Chain.

When JavaScript needs to find a variable, it first looks in the current Variable Environment. If it’s not there, it follows the Scope Chain upward until it either finds the variable or reaches the Global scope.

var global = "I'm global";

function first() {
  var a = "I'm in first";
  function second() {
    var b = "I'm in second";
    console.log(b);      // found in current scope
    console.log(a);      // found via scope chain (first's scope)
    console.log(global); // found via scope chain (global scope)
  }
  second();
}
first();

In simple language, every time JavaScript runs your code or calls a function, it creates a little environment with two phases — first it sets up memory (creation), then it runs the code (execution). These environments stack on top of each other in the Call Stack, and each one can look up to its parent for variables it doesn’t have locally.


17

The this Keyword

intermediate this context arrow-functions classes

The this keyword in JavaScript is one of the most confusing concepts. Unlike most languages where this always refers to the current instance, in JavaScript this depends on how the function is called, not where it’s defined.

Let’s go through every scenario.

  • Global — this = window (or global in Node)
  • Object method — this = the object calling the method
  • Regular function — this = window (undefined in strict mode)
  • Arrow function — this = the parent’s this (inherits, never its own)
  • Class — this = the instance (the new object created)
  • Event handler — this = the element that received the event

1. this in Global Context

In the global scope (outside any function), this refers to the global object.

console.log(this); // window (in browser) or global (in Node.js)

var name = "Manish";
console.log(this.name); // "Manish" — var attaches to window

2. this in Object Methods

When a function is called as a method of an object, this refers to the object that owns the method.

const user = {
  name: "Manish",
  greet() {
    console.log(this.name); // "Manish" — this = user
  }
};
user.greet();

3. this in Regular Functions

In a regular function (not called as an object method), this defaults to window. In strict mode, it’s undefined.

function showThis() {
  console.log(this); // window (or undefined in strict mode)
}
showThis();

"use strict";
function showThisStrict() {
  console.log(this); // undefined
}
showThisStrict();

4. this in Arrow Functions

Arrow functions do not have their own this. They inherit this from the enclosing lexical scope (the parent). This is the biggest difference from regular functions.

const user = {
  name: "Manish",
  greet: () => {
    console.log(this.name); // undefined — arrow inherits global this, not user
  },
  delayedGreet() {
    setTimeout(() => {
      console.log(this.name); // "Manish" — arrow inherits this from delayedGreet
    }, 1000);
  }
};

user.greet();        // undefined (this = window, not user)
user.delayedGreet(); // "Manish" (arrow inherits this from the method)

This is why arrow functions are perfect for callbacks inside methods — they keep the parent’s this.

5. this in Classes

Inside a class, this refers to the instance being created.

class User {
  constructor(name) {
    this.name = name; // this = the new instance
  }
  greet() {
    console.log(`Hi, I'm ${this.name}`);
  }
}

const user = new User("Manish");
user.greet(); // "Hi, I'm Manish" — this = user instance

6. this in Event Handlers

In a DOM event handler, this refers to the element that received the event.

const button = document.querySelector("button");
button.addEventListener("click", function() {
  console.log(this); // the <button> element
});

// But with arrow function, this is inherited from outer scope
button.addEventListener("click", () => {
  console.log(this); // window — not the button!
});

The golden rule

In simple language, this is determined by how a function is called:

  • Called with obj.method() → this is obj
  • Called alone fn() → this is window (or undefined in strict mode)
  • Arrow function → this is whatever the parent’s this was
  • Called with new → this is the new instance
  • Called with call/apply/bind → this is whatever you pass in

If you remember just one thing — look at the left side of the dot when the function is called. That’s what this is.
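A tiny demonstration of the rule, using a made-up function whoAmI:

```javascript
function whoAmI() {
  // a bare call binds `this` to the global object (or undefined in strict mode)
  if (this === undefined || this === globalThis) return "no useful this";
  return this.name;
}

const a = { name: "A", whoAmI };
const b = { name: "B", whoAmI };

console.log(a.whoAmI()); // "A": look left of the dot
console.log(b.whoAmI()); // "B": same function, different caller
const bare = whoAmI;
console.log(bare());     // "no useful this": called alone, nothing left of the dot
```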


18

Shallow Copy & Deep Copy

beginner objects arrays references
Shallow copy (let b = a): both names point to the same [1, 2, 3, 4] in memory. Deep copy (let b = [...a]): a new [1, 2, 3, 4] is created in different memory.

Shallow Copy

Copying by plain assignment does not create a new array at all: the new variable is just another reference to the same array in memory, so a change through one is visible through the other.

let a = [1, 2, 3, 4]; // memory address: 10225
let b = a;            // memory address: 10225 (same array)
b.push(5);            // mutates the shared array
console.log(a);       // [1, 2, 3, 4, 5]

Deep Copy

Copying with the spread operator or map() creates a new array, so the memory reference is different and the original is unaffected. (Strictly speaking these are one-level copies; for a flat array of primitives like this, that amounts to a deep copy, but nested objects inside it would still be shared.)

// Using map
let a = [1, 2, 3, 4];
let b = a.map((e) => e);
b.push(5);
console.log(a); // [1, 2, 3, 4]

// Using spread operator
let c = [1, 2, 3, 4];
let d = [...c];
d.push(5);
console.log(c); // [1, 2, 3, 4]
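One caveat worth knowing: spread and map only copy one level. When the array or object contains nested objects, the inner references are still shared, and a true deep copy needs something like structuredClone() (available in all modern browsers and Node 17+):

```javascript
// Spread copies only the top level: nested objects are shared references
const original = { name: "Manish", address: { city: "Mumbai" } };
const shallow = { ...original };
shallow.address.city = "Delhi";
console.log(original.address.city); // "Delhi": the inner object was shared!

// structuredClone creates a fully independent deep copy
const original2 = { name: "Manish", address: { city: "Mumbai" } };
const deep = structuredClone(original2);
deep.address.city = "Delhi";
console.log(original2.address.city); // "Mumbai": untouched
```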

Async JavaScript

19

Callbacks

beginner callbacks async callback-hell

A callback is simply a function that we pass as an argument to another function, and that function calls it back at some point. That’s it. The name literally means “call me back when you’re done.”
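We can see the idea with a tiny function of our own (calculate is a made-up example, not a library function):

```javascript
// calculate receives a function and "calls it back" with the two numbers
function calculate(a, b, operation) {
  return operation(a, b);
}

console.log(calculate(3, 4, (x, y) => x + y)); // 7
console.log(calculate(3, 4, (x, y) => x * y)); // 12
```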

Synchronous Callbacks

We already use callbacks all the time without realizing it. Array methods like forEach, map, and filter all take callbacks.

const numbers = [1, 2, 3];

numbers.forEach(function(num) {
  console.log(num); // 1, 2, 3
});

const doubled = numbers.map(num => num * 2); // [2, 4, 6]

These are synchronous callbacks — they run immediately, one after another, in the same order.

Asynchronous Callbacks

The real power of callbacks comes with async operations. When we need to do something that takes time (like a timer, an API call, or reading a file), we pass a callback that runs after the operation is done.

console.log("Start");

setTimeout(function() {
  console.log("Timer done!"); // runs after 2 seconds
}, 2000);

console.log("End");
// Output: Start, End, Timer done!

The callback inside setTimeout doesn’t block the rest of our code. JavaScript moves on to the next line and comes back to run the callback when the timer finishes.

The problem: Callback Hell

Now imagine we need to do multiple async things in sequence — one after another. With callbacks, each next step has to be nested inside the previous one.

getUser(userId, function(user) {
  getOrders(user.id, function(orders) {
    getOrderDetails(orders[0].id, function(details) {
      getShippingInfo(details.trackingId, function(shipping) {
        console.log(shipping.status);
        // good luck reading this...
      });
    });
  });
});

This is called Callback Hell or the Pyramid of Doom. Notice how the code keeps moving to the right with each nested callback? It becomes:

  • Hard to read
  • Hard to debug
  • Hard to handle errors (each level needs its own error handling)
  • Hard to maintain

Here’s a more visual example with setTimeout:

setTimeout(function() {
  console.log("Step 1 done");
  setTimeout(function() {
    console.log("Step 2 done");
    setTimeout(function() {
      console.log("Step 3 done");
      setTimeout(function() {
        console.log("Step 4 done");
        // we're 4 levels deep already...
      }, 1000);
    }, 1000);
  }, 1000);
}, 1000);

Error handling is messy too

With callbacks, there’s no standard way to handle errors. The common convention (from Node.js) is “error-first callbacks” — the first argument is always an error.

readFile("data.json", function(error, data) {
  if (error) {
    console.log("Failed to read file:", error);
    return;
  }
  console.log(data);
});

But when we nest these, we need to check for errors at every single level. It gets painful fast.

Why Promises were invented

Callbacks work fine for simple one-off async operations. But for anything with multiple sequential steps, they create deeply nested, hard-to-read code. This is exactly why Promises were introduced in ES6 — they let us flatten the nesting and chain async operations in a clean, readable way. We’ll cover those next.

In simple language, a callback is just a function you hand to someone else and say “call this when you’re done.” They’re fundamental to how JavaScript handles async operations, but when you stack them too deep, the code becomes a nightmare. That’s when you reach for Promises.


20

Promises

intermediate promises async then catch

A Promise is an object that represents the eventual result of an asynchronous operation. Think of it like ordering food at a restaurant — you get a receipt (the promise) immediately, and the food (the result) comes later. The receipt can either be fulfilled (food arrives) or rejected (kitchen is closed).

Promise States

A Promise is always in one of three states:

  • Pending — initial state, waiting
  • Fulfilled — resolve() was called
  • Rejected — reject() was called

Once settled (fulfilled or rejected), a Promise can never change state again.

Creating a Promise

We create a Promise using the new Promise() constructor. It takes a function with two parameters: resolve (for success) and reject (for failure).

const myPromise = new Promise((resolve, reject) => {
  const success = true;

  if (success) {
    resolve("It worked!"); // fulfilled
  } else {
    reject("Something went wrong"); // rejected
  }
});

Consuming a Promise: .then(), .catch(), .finally()

  • .then(callback) — runs when the Promise is fulfilled
  • .catch(callback) — runs when the Promise is rejected
  • .finally(callback) — runs no matter what (fulfilled or rejected)
myPromise
  .then(result => console.log(result))   // "It worked!"
  .catch(error => console.log(error))     // runs if rejected
  .finally(() => console.log("Done!"));   // always runs

Promise Chaining

The real magic of Promises is chaining. Whatever we return from a .then() gets passed as the input to the next .then(). This flattens the callback hell into a clean chain.

getUser(userId)
  .then(user => getOrders(user.id))
  .then(orders => getOrderDetails(orders[0].id))
  .then(details => getShippingInfo(details.trackingId))
  .then(shipping => console.log(shipping.status))
  .catch(error => console.log("Something failed:", error));

Compare this with the callback hell version — night and day difference. Each .then() returns a new Promise, so we can keep chaining.

Error Propagation

One of the best things about Promises is that a single .catch() at the end catches errors from any step in the chain. If step 2 fails, it skips all remaining .then() calls and jumps straight to .catch().

fetchData()
  .then(data => processData(data))    // if this throws...
  .then(result => saveResult(result))  // ...this is skipped
  .then(() => console.log("Saved!"))   // ...this is skipped too
  .catch(error => {
    console.log("Caught:", error);     // ...and we land here
  });

Converting Callbacks to Promises

We can wrap any callback-based function in a Promise. This is a very useful pattern:

// Callback-based: legacyFetch is a hypothetical Node-style API
// (note: the real fetch() already returns a Promise)
function loadData(url, callback) {
  legacyFetch(url, (err, data) => callback(err, data));
}

// Promise-based wrapper
function loadData(url) {
  return new Promise((resolve, reject) => {
    legacyFetch(url, (err, data) => {
      if (err) reject(err);
      else resolve(data);
    });
  });
}

// Now we can use it with .then()
loadData("/api/users")
  .then(data => console.log(data))
  .catch(err => console.log(err));

Quick tips

  • A .then() can take two arguments: then(onFulfilled, onRejected) — but using .catch() is cleaner and more readable.
  • If we return a plain value from .then(), the next .then() gets it wrapped in a resolved Promise automatically.
  • If we return a Promise from .then(), the next .then() waits for it to settle.
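The last two tips can be seen in one short chain:

```javascript
Promise.resolve(2)
  .then(n => n * 10)                  // plain value: auto-wrapped in a resolved Promise
  .then(n => Promise.resolve(n + 1))  // returned Promise: the next .then() waits for it
  .then(n => console.log(n));         // 21
```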

In simple language, Promises give us a way to handle async operations without the nesting nightmare. We chain .then() calls for sequential steps and use a single .catch() at the end for errors. They were such a big improvement over callbacks that they became the foundation for async/await.


21

Async/Await

intermediate async await promises error-handling

async/await is syntactic sugar on top of Promises. It lets us write asynchronous code that looks and reads like synchronous code. Under the hood, it’s still Promises — just with a much cleaner syntax.

The basics

An async function always returns a Promise. The await keyword pauses execution inside the async function until the Promise resolves, and then gives us the resolved value.

async function getUser() {
  const response = await fetch("/api/user"); // pauses here until fetch completes
  const user = await response.json();        // pauses here until parsing completes
  return user;                                // automatically wrapped in a Promise
}

// Calling it
getUser().then(user => console.log(user));

Without async/await, the same code with Promises would look like:

function getUser() {
  return fetch("/api/user")
    .then(response => response.json())
    .then(user => user);
}

Both do the exact same thing. But the async/await version reads top-to-bottom like normal code.

Error Handling with try/catch

Instead of .catch(), we use try/catch — just like synchronous error handling.

async function getUser() {
  try {
    const response = await fetch("/api/user");
    const user = await response.json();
    console.log(user);
  } catch (error) {
    console.log("Failed to fetch user:", error);
  } finally {
    console.log("Done"); // always runs
  }
}

This is much nicer than chaining .then().catch() everywhere, especially when we have multiple await calls.

Sequential vs Parallel Execution

This is a very important concept and a common interview question.

Sequential (slow) — one after another

async function loadData() {
  const users = await fetchUsers();     // waits 2 seconds
  const posts = await fetchPosts();     // waits 2 more seconds
  const comments = await fetchComments(); // waits 2 more seconds
  // Total: ~6 seconds (each one waits for the previous)
}

Parallel (fast) — all at once with Promise.all

If the requests don’t depend on each other, we should fire them all at once:

async function loadData() {
  const [users, posts, comments] = await Promise.all([
    fetchUsers(),     // starts immediately
    fetchPosts(),     // starts immediately
    fetchComments()   // starts immediately
  ]);
  // Total: ~2 seconds (all run in parallel, we wait for the slowest)
}

Common mistake: await in forEach

This is a trap that catches a lot of people. forEach does not wait for async callbacks. It fires them all off and moves on.

// WRONG — these all fire at once, forEach doesn't wait
const ids = [1, 2, 3];
ids.forEach(async (id) => {
  const user = await fetchUser(id);
  console.log(user); // order is NOT guaranteed
});
console.log("Done"); // this runs BEFORE any user is fetched!

If we need sequential processing, use a for...of loop (this must run inside an async function, or at a module's top level):

// CORRECT — processes one at a time, in order
for (const id of ids) {
  const user = await fetchUser(id);
  console.log(user); // guaranteed order
}
console.log("Done"); // this runs AFTER all users are fetched

If we want parallel processing but still need to wait for all of them:

// CORRECT — parallel processing, wait for all
const users = await Promise.all(ids.map(id => fetchUser(id)));
console.log(users); // all users, in order

async/await with arrow functions

We can use async with arrow functions too:

const getUser = async (id) => {
  const response = await fetch(`/api/users/${id}`);
  return response.json();
};

Key things to remember

  • async functions always return a Promise, even if we return a plain value
  • await can only be used inside an async function (or at the top level of a module)
  • await only pauses the current async function, not the entire program
  • Under the hood, everything after await goes to the Microtask Queue (just like .then())
  • For independent async operations, always use Promise.all() for better performance
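The first bullet is easy to verify (plain is a throwaway example name):

```javascript
async function plain() {
  return 42; // a plain value...
}

const p = plain();
console.log(p instanceof Promise); // true: ...still comes back wrapped in a Promise
p.then(v => console.log(v));       // 42
```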

In simple language, async/await lets us write async code that looks like regular synchronous code. It’s still Promises under the hood, but with a cleaner syntax. Use try/catch for errors, Promise.all for parallel operations, and never use await inside forEach.


22

Promise Methods

intermediate promises async Promise.all Promise.race

JavaScript gives us four static methods on the Promise object to handle multiple Promises at once. Each behaves differently — knowing when to use which is a common interview question.

  • Promise.all() — all must succeed. Waits for all to resolve; if any one rejects, the whole thing rejects immediately (fail-fast).
  • Promise.allSettled() — wait for all, no matter what. Never rejects; returns status + value/reason for each Promise.
  • Promise.race() — first to settle wins. Returns the result of whichever Promise settles first, whether it resolves or rejects (fastest wins).
  • Promise.any() — first to resolve wins. Ignores rejections; only rejects if ALL Promises reject, with an AggregateError (optimistic).

Promise.all()

Takes an array of Promises and returns a single Promise that resolves with an array of all results. If any one rejects, the whole thing rejects immediately with that error.

Use case: When we need ALL results and any failure means we can’t proceed.

const [users, posts, settings] = await Promise.all([
  fetch("/api/users").then(r => r.json()),
  fetch("/api/posts").then(r => r.json()),
  fetch("/api/settings").then(r => r.json())
]);
// All three must succeed, or we get an error
// If posts API fails, we don't get users or settings either
// What happens on rejection
Promise.all([
  Promise.resolve("A"),
  Promise.reject("B failed!"),
  Promise.resolve("C")          // this still runs, but result is ignored
]).catch(err => console.log(err)); // "B failed!"

Promise.allSettled()

Waits for all Promises to settle (either resolve or reject). Never rejects itself. Returns an array of objects with status, value (if fulfilled), or reason (if rejected).

Use case: When we want to try everything and handle successes and failures individually.

const results = await Promise.allSettled([
  fetch("/api/users").then(r => r.json()),
  fetch("/api/posts").then(r => r.json()),
  fetch("/api/broken-endpoint").then(r => r.json())
]);

// results:
// [
//   { status: "fulfilled", value: [...users] },
//   { status: "fulfilled", value: [...posts] },
//   { status: "rejected", reason: Error("404") }
// ]

// We can handle each individually
results.forEach(result => {
  if (result.status === "fulfilled") {
    console.log("Got:", result.value);
  } else {
    console.log("Failed:", result.reason);
  }
});

Promise.race()

Returns the result of whichever Promise settles first — whether it resolves or rejects. The rest are ignored (but still run in the background).

Use case: Timeouts, picking the fastest response.

// Implementing a timeout for a fetch request
const result = await Promise.race([
  fetch("/api/data").then(r => r.json()),
  new Promise((_, reject) =>
    setTimeout(() => reject("Timeout!"), 5000)
  )
]);
// If fetch takes longer than 5 seconds, we get "Timeout!" error
// First to settle (even if it's a rejection)
Promise.race([
  new Promise(resolve => setTimeout(() => resolve("slow"), 2000)),
  new Promise((_, reject) => setTimeout(() => reject("fast error"), 500))
]).catch(err => console.log(err)); // "fast error" — rejection settled first

Promise.any()

Returns the result of the first Promise to resolve. It ignores rejections entirely. Only rejects if all Promises reject — with an AggregateError.

Use case: Trying multiple sources, we only need one to work.

// Try multiple CDN servers, use whichever responds first
const data = await Promise.any([
  fetch("https://cdn1.example.com/data.json"),
  fetch("https://cdn2.example.com/data.json"),
  fetch("https://cdn3.example.com/data.json")
]);
// Uses the first successful response, ignores any that fail
// All reject → AggregateError
Promise.any([
  Promise.reject("Error 1"),
  Promise.reject("Error 2"),
  Promise.reject("Error 3")
]).catch(err => {
  console.log(err);          // AggregateError: All promises were rejected
  console.log(err.errors);   // ["Error 1", "Error 2", "Error 3"]
});

Quick cheat sheet

Method             | Resolves when    | Rejects when
-------------------|------------------|--------------------------------
Promise.all        | ALL resolve      | ANY one rejects
Promise.allSettled | ALL settle       | Never rejects
Promise.race       | First to settle  | First to settle (if it rejects)
Promise.any        | First to resolve | ALL reject

In simple language: use all when every result matters, allSettled when you want to try everything regardless of failures, race when speed matters, and any when you just need one success from multiple attempts.


23

Error Handling

intermediate errors try-catch async debugging

Errors happen. APIs fail, users enter bad data, and things break. The key is to handle them gracefully so our app doesn’t crash and the user gets a useful message.

try/catch/finally

The try/catch block lets us attempt something and handle the error if it fails. The finally block runs no matter what.

try {
  const data = JSON.parse("not valid json");
} catch (error) {
  console.log("Parsing failed:", error.message);
} finally {
  console.log("This always runs"); // cleanup goes here
}

The finally block is perfect for cleanup — closing connections, hiding loading spinners, etc. It runs whether the try succeeded or failed.

The Error Object

When an error occurs, JavaScript creates an Error object with three useful properties:

  • message — the human-readable error description
  • name — the type of error (e.g., “TypeError”, “ReferenceError”)
  • stack — the full stack trace (which file, which line, the call chain)
try {
  undefined.foo;
} catch (error) {
  console.log(error.name);    // "TypeError"
  console.log(error.message); // "Cannot read properties of undefined (reading 'foo')"
  console.log(error.stack);   // full stack trace with file and line numbers
}

Throwing Errors

We can throw our own errors using throw. This is useful for validation and enforcing rules in our code.

function divide(a, b) {
  if (b === 0) {
    throw new Error("Cannot divide by zero");
  }
  return a / b;
}

try {
  divide(10, 0);
} catch (error) {
  console.log(error.message); // "Cannot divide by zero"
}

We can throw anything — a string, a number, an object — but it’s best practice to always throw an Error object so we get the stack trace.
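Here is why: a thrown string carries no stack, while an Error object does:

```javascript
try {
  throw "something broke";     // legal, but...
} catch (e) {
  console.log(e.stack);        // undefined: nothing to debug with
}

try {
  throw new Error("something broke");
} catch (e) {
  console.log(typeof e.stack); // "string": file, line, and call chain
}
```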

Common Error Types

JavaScript has several built-in error types. Knowing what each one means helps with debugging:

  • TypeError — using a value the wrong way (calling non-function, accessing property of undefined)
  • ReferenceError — accessing a variable that doesn’t exist
  • SyntaxError — code has invalid syntax (missing bracket, bad token)
  • RangeError — a number is outside its valid range (array length -1, infinite recursion)
// TypeError
null.foo;                 // Cannot read properties of null
"hello"();                // "hello" is not a function

// ReferenceError
console.log(x);           // x is not defined

// RangeError
new Array(-1);             // Invalid array length

Custom Error Classes

For real applications, we often want our own error types so we can distinguish between different kinds of failures.

class ValidationError extends Error {
  constructor(field, message) {
    super(message);
    this.name = "ValidationError";
    this.field = field;
  }
}

class NotFoundError extends Error {
  constructor(resource) {
    super(`${resource} not found`);
    this.name = "NotFoundError";
  }
}

// Usage
try {
  throw new ValidationError("email", "Invalid email format");
} catch (error) {
  if (error instanceof ValidationError) {
    console.log(`${error.field}: ${error.message}`); // "email: Invalid email format"
  }
}

Using instanceof lets us catch specific error types and handle them differently.

Error Handling in Async Code

With Promises

Unhandled Promise rejections are one of the most common bugs. Always add a .catch().

fetch("/api/data")
  .then(res => res.json())
  .then(data => console.log(data))
  .catch(error => console.log("Request failed:", error));

With async/await

Wrap await calls in try/catch — it works exactly like synchronous error handling.

async function loadUser(id) {
  try {
    const response = await fetch(`/api/users/${id}`);
    if (!response.ok) {
      throw new Error(`HTTP ${response.status}`);
    }
    const user = await response.json();
    return user;
  } catch (error) {
    console.log("Failed to load user:", error.message);
    return null; // return a fallback
  }
}

Global error catching

For errors that slip through, we can use global handlers:

// Browser — catches unhandled errors
window.addEventListener("error", (event) => {
  console.log("Uncaught error:", event.message);
});

// Catches unhandled Promise rejections
window.addEventListener("unhandledrejection", (event) => {
  console.log("Unhandled rejection:", event.reason);
});

Best practices

  • Always use Error objects (not strings) so you get a stack trace
  • Catch errors at the level where you can actually handle them
  • Don’t swallow errors silently — at least log them
  • Use custom error classes in larger apps for better error categorization
  • In async code, always handle rejections — unhandled rejections will crash Node.js

In simple language, try/catch is our safety net. We wrap risky code in try, handle failures in catch, and do cleanup in finally. For async code, use try/catch with async/await or .catch() with Promises. Throw custom errors when we need our code to fail loudly with a clear message.


24

Event Loop

advanced async event-loop runtime

But first, JavaScript is single-threaded

JavaScript has only one Call Stack. That means it can only do one thing at a time. If a function is running, nothing else can run until that function is done.

But then think about it — if JS can only do one thing at a time, how does it handle things like API calls, timers, or file reads without freezing the entire page?

That’s where the browser (or Node.js) comes in. The browser provides something called Web APIs — these are separate threads that handle the heavy work outside of JavaScript. Things like setTimeout, fetch, addEventListener are not part of JavaScript itself, they are provided by the browser.

The complete picture

When we write async code, it goes through a cycle. Let’s understand each part:

  • Call Stack — Where our code actually runs. Functions get pushed on top when called, and popped off when done.
  • Web APIs — The browser handles timers, network requests, DOM events here. This runs in a separate thread.
  • Callback Queue (Macrotask Queue) — When a Web API is done (like a timer finishes), the callback is pushed here.
  • Microtask Queue — Where Promise callbacks (.then, .catch, .finally) and queueMicrotask go. This has higher priority than the Callback Queue.
  • Event Loop — A loop that keeps checking: “Is the Call Stack empty? If yes, pick tasks from the queues and push them to the stack.”
Call Stack (one thing at a time)
    ↓ async calls go to
Web APIs (setTimeout, fetch, DOM; runs in browser threads)
    ↓ when done, callbacks go to
Microtask Queue (Promises, queueMicrotask) ← high priority
Callback Queue (setTimeout, setInterval)
    ↑ Event Loop moves them to the Call Stack

How the Event Loop actually works

The Event Loop follows a simple rule that keeps repeating:

  1. Run everything in the Call Stack until it’s empty
  2. Check the Microtask Queue — run ALL of them (drain it completely)
  3. Pick ONE task from the Callback Queue and push it to the Call Stack
  4. Go back to step 1

The important thing to remember is — all microtasks are drained before the next macrotask. This is why Promises always run before setTimeout.

Macrotasks vs Microtasks

Microtasks (high priority):

  • Promise.then / .catch / .finally
  • queueMicrotask()
  • MutationObserver
  • async/await (code after await)

  → ALL of these run before the next macrotask

Macrotasks (normal priority):

  • setTimeout / setInterval
  • setImmediate (Node.js)
  • I/O operations
  • UI rendering

  → ONE is picked per Event Loop cycle

Example 1: The classic interview question

console.log("1"); // Synchronous — runs first

setTimeout(() => {
  console.log("2"); // Callback Queue (macrotask)
}, 0);

Promise.resolve().then(() => {
  console.log("3"); // Microtask Queue (higher priority)
});

console.log("4"); // Synchronous — runs first

// Output: 1, 4, 3, 2

Let’s walk through what happens step by step:

Step 1 console.log("1") → Call Stack → runs → prints 1
Step 2 setTimeout(cb, 0) → Call Stack → sent to Web API → timer done → cb goes to Callback Queue
Step 3 Promise.then(cb) → Call Stack → resolved → cb goes to Microtask Queue
Step 4 console.log("4") → Call Stack → runs → prints 4
Step 5 Call Stack is empty → Event Loop drains Microtask Queue → prints 3
Step 6 Microtask Queue empty → Event Loop picks from Callback Queue → prints 2
Output: 1, 4, 3, 2

Even though setTimeout is set to 0ms, the Promise callback runs before it because the Microtask Queue has higher priority than the Callback Queue.

Example 2: Nested microtasks

This is a tricky one. When a microtask creates another microtask, the Event Loop drains ALL of them before moving to the next macrotask.

setTimeout(() => console.log("1"), 0);

Promise.resolve().then(() => {
  console.log("2");
  Promise.resolve().then(() => console.log("3"));
});

console.log("4");

// Output: 4, 2, 3, 1

Why this order? 4 is synchronous so it runs first. Then the Event Loop picks microtask 2. While running 2, a new microtask 3 is created — the Event Loop drains that too before touching the Callback Queue. Finally, macrotask 1 runs.

Example 3: async/await is just Promises

A lot of people get confused by async/await, but it’s just syntactic sugar over Promises. Everything after await goes to the Microtask Queue.

async function foo() {
  console.log("1");       // runs immediately (synchronous)
  await Promise.resolve();
  console.log("2");       // this goes to Microtask Queue
}

foo();
console.log("3");

// Output: 1, 3, 2

In simple language, when JavaScript sees await, it pauses that function and goes back to run the rest of the code. The paused part resumes later from the Microtask Queue.

Common gotcha: setTimeout(fn, 0) is NOT immediate

A lot of people think setTimeout(fn, 0) means “run immediately”. But it doesn’t. It means “run this as soon as possible after the Call Stack is empty and all microtasks are done”. The 0ms is the minimum delay, not a guarantee.

console.log("start");

setTimeout(() => {
  console.log("timeout"); // this will ALWAYS run last
}, 0);

Promise.resolve().then(() => console.log("promise"));

console.log("end");

// Output: start, end, promise, timeout

No matter what, setTimeout(fn, 0) will always wait for all synchronous code and all microtasks to finish first.


25

Debouncing and Throttling

intermediate performance events optimization

The major difference between debouncing and throttling is that debounce calls the function only after the user has stopped triggering the event for a specified amount of time, while throttle calls the function at most once per specified interval while the user keeps triggering the event.

For example, if we debounce a scroll function with a timer of 250ms, the function is only called if the user hasn’t scrolled in 250ms. If we throttle a scroll function with a timer of 250ms, the function is called every 250ms while the user is scrolling.

Debounce: events fire rapidly → the function fires only once, after the user pauses.
Throttle: events fire rapidly → the function fires at regular intervals while the events continue.

Debouncing

let input = document.getElementById('name');
let debounceValue = document.getElementById('debounce-value');

const updateDebounceValue = () => {
  debounceValue.innerHTML = input.value;
}

let debounceTimer;

const debounce = (callback, time) => {
  window.clearTimeout(debounceTimer);
  debounceTimer = window.setTimeout(callback, time);
};

input.addEventListener(
  "input",
  () => {
    debounce(updateDebounceValue, 500)
  },
  false
);

Throttling

let throttleTimer;

const throttle = (callback, time) => {
  if (throttleTimer) return;
  throttleTimer = true;
  setTimeout(() => {
    callback();
    throttleTimer = false;
  }, time);
}

window.addEventListener("scroll", () => {
  throttle(handleScrollAnimation, 250);
});
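
The versions above share a single global timer variable, which only works for one debounced or throttled function per page. A reusable closure-based sketch (note this throttle is a leading-edge variant that fires immediately and then blocks, unlike the delayed version above; the listener names in the usage comments are hypothetical):

```javascript
// Closure-based factories: each wrapped function gets its own timer state.
function debounce(callback, delay) {
  let timerId;
  return function (...args) {
    clearTimeout(timerId); // reset the countdown on every call
    timerId = setTimeout(() => callback.apply(this, args), delay);
  };
}

// Leading-edge throttle: fires immediately, then ignores calls for `interval` ms.
function throttle(callback, interval) {
  let ready = true;
  return function (...args) {
    if (!ready) return;
    ready = false;
    callback.apply(this, args);
    setTimeout(() => { ready = true; }, interval);
  };
}

// Usage (element references are illustrative):
// input.addEventListener("input", debounce(updateDebounceValue, 500));
// window.addEventListener("scroll", throttle(handleScrollAnimation, 250));
```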

Objects & Prototypes

26

Objects

beginner objects properties Object.keys Object.freeze

Objects are everywhere in JavaScript. They’re collections of key-value pairs — we use them to group related data and functionality together.

Creating Objects

Object Literal (most common)

const user = {
  name: "Manish",
  age: 25,
  greet() {
    console.log(`Hi, I'm ${this.name}`);
  }
};

Constructor Function

function User(name, age) {
  this.name = name;
  this.age = age;
}
const user = new User("Manish", 25);

Object.create()

Creates a new object with a specified prototype:

const proto = { greet() { console.log(`Hi, I'm ${this.name}`); } };
const user = Object.create(proto);
user.name = "Manish";
user.greet(); // "Hi, I'm Manish"

Accessing Properties

Two ways — dot notation and bracket notation:

const user = { name: "Manish", "fav-color": "blue" };

// Dot notation — clean and simple
console.log(user.name); // "Manish"

// Bracket notation — required for special characters and dynamic keys
console.log(user["fav-color"]); // "blue"

const key = "name";
console.log(user[key]); // "Manish" — dynamic access

Use dot notation by default. Use brackets when the key has special characters, starts with a number, or comes from a variable.

Useful Object Methods

Object.keys(), Object.values(), Object.entries()

const user = { name: "Manish", age: 25, city: "Pune" };

Object.keys(user);    // ["name", "age", "city"]
Object.values(user);  // ["Manish", 25, "Pune"]
Object.entries(user);  // [["name", "Manish"], ["age", 25], ["city", "Pune"]]

// Handy for looping
for (const [key, value] of Object.entries(user)) {
  console.log(`${key}: ${value}`);
}

Object.freeze() vs Object.seal()

Both restrict modifications, but in different ways:

  • Object.freeze() — Can’t add, remove, or modify any properties. Fully locked.
  • Object.seal() — Can’t add or remove properties, but CAN modify existing ones.
const frozen = Object.freeze({ name: "Manish", age: 25 });
frozen.name = "Rahul"; // silently fails (or throws in strict mode)
frozen.city = "Pune";  // silently fails
console.log(frozen);   // { name: "Manish", age: 25 }

const sealed = Object.seal({ name: "Manish", age: 25 });
sealed.name = "Rahul"; // works! existing property can be modified
sealed.city = "Pune";  // fails — can't add new property
console.log(sealed);   // { name: "Rahul", age: 25 }

Note: Both are shallow — if a property is an object, the nested object can still be modified.
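
Since the built-in freeze is shallow, full immutability needs a recursive helper. A minimal sketch (deepFreeze is our own hand-rolled function, not a built-in):

```javascript
// deepFreeze: recursively freeze an object and everything nested inside it.
function deepFreeze(obj) {
  for (const value of Object.values(obj)) {
    // freeze nested objects/arrays first; the isFrozen check avoids cycles
    if (typeof value === "object" && value !== null && !Object.isFrozen(value)) {
      deepFreeze(value);
    }
  }
  return Object.freeze(obj);
}

const config = deepFreeze({ theme: "dark", nested: { debug: true } });
console.log(Object.isFrozen(config));        // true
console.log(Object.isFrozen(config.nested)); // true, the nested object is frozen too
```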

Object.assign() for Merging

Copies properties from source objects into a target object:

const defaults = { theme: "dark", lang: "en" };
const userPrefs = { theme: "light" };

const settings = Object.assign({}, defaults, userPrefs);
console.log(settings); // { theme: "light", lang: "en" }

// Modern alternative: spread operator
const settings2 = { ...defaults, ...userPrefs };
console.log(settings2); // { theme: "light", lang: "en" }

Later sources override earlier ones for the same key.

Computed Property Names

We can use expressions as object keys by wrapping them in brackets:

const field = "email";
const user = {
  name: "Manish",
  [field]: "manish@example.com",          // "email": "manish@example.com"
  [`${field}Verified`]: true               // "emailVerified": true
};

Property Shorthand

When the variable name matches the key name, we can skip the colon:

const name = "Manish";
const age = 25;

// Without shorthand
const user = { name: name, age: age };

// With shorthand — same thing
const user2 = { name, age };

In simple language, objects are just bags of key-value pairs. We create them with {}, access properties with dot or bracket notation, and use built-in methods like Object.keys(), Object.freeze(), and Object.assign() to work with them. They’re the foundation of almost everything in JavaScript.


27

Prototypal Inheritance

intermediate prototype inheritance prototype-chain __proto__

JavaScript doesn’t have classical inheritance like Java or C++. Instead, it uses prototypal inheritance — objects inherit directly from other objects. Every object has a hidden link to another object called its prototype.

Every object has a [[Prototype]]

When we create an object, JavaScript secretly links it to another object — its prototype. We can access this link using Object.getPrototypeOf(obj) or the older __proto__ property.

const user = { name: "Manish" };

console.log(Object.getPrototypeOf(user)); // Object.prototype
console.log(user.__proto__);               // same thing (older way)
console.log(user.__proto__ === Object.prototype); // true

The Prototype Chain

When we try to access a property on an object, JavaScript first looks at the object itself. If it doesn’t find it there, it goes up to the prototype. If it’s not there either, it goes to the prototype’s prototype, and so on — until it hits null.

Prototype Chain Lookup:

  dog instance { name: "Buddy", breed: "Lab" }              ← Step 1: look here first
      ↓ __proto__
  Animal.prototype { speak() { ... } }                      ← Step 2: if not found, check here
      ↓ __proto__
  Object.prototype { toString(), hasOwnProperty(), ... }    ← Step 3: the base prototype
      ↓ __proto__
  null   (end of the chain; property not found → undefined)

function Animal(name) {
  this.name = name;
}
Animal.prototype.speak = function() {
  console.log(`${this.name} makes a sound`);
};

const dog = new Animal("Buddy");
dog.breed = "Lab";

dog.breed;           // "Lab" — found on dog itself
dog.speak();         // found on Animal.prototype
dog.toString();      // found on Object.prototype
dog.randomProp;      // undefined — not found anywhere in the chain

Object.create() for Prototypal Inheritance

Object.create() creates a new object and sets its prototype to whatever we pass in:

const animal = {
  speak() {
    console.log(`${this.name} makes a sound`);
  }
};

const dog = Object.create(animal);
dog.name = "Buddy";
dog.speak(); // "Buddy makes a sound"

console.log(Object.getPrototypeOf(dog) === animal); // true

Constructor Functions and prototype

When we use new with a function, the created object’s __proto__ is set to the constructor’s prototype property.

function Person(name) {
  this.name = name;
}
Person.prototype.greet = function() {
  console.log(`Hi, I'm ${this.name}`);
};

const manish = new Person("Manish");
manish.greet(); // "Hi, I'm Manish"

// The chain:
// manish.__proto__ === Person.prototype  → true
// Person.prototype.__proto__ === Object.prototype  → true

Methods defined on prototype are shared across all instances — they exist only once in memory, not copied to each instance.
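
We can verify this sharing directly: the method is one function object, found through the chain rather than copied onto each instance (the definitions are repeated so the snippet stands alone):

```javascript
function Person(name) { this.name = name; }
Person.prototype.greet = function () { return `Hi, I'm ${this.name}`; };

const a = new Person("Manish");
const b = new Person("Rahul");

console.log(a.greet === b.greet);                // true, same function object
console.log(a.greet === Person.prototype.greet); // true, found via the chain
console.log(a.hasOwnProperty("greet"));          // false, not copied to the instance
```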

hasOwnProperty() vs in operator

When checking if a property exists, there’s an important difference:

  • hasOwnProperty() — only checks the object itself, NOT the prototype chain
  • in operator — checks the object AND the entire prototype chain
function Car(model) {
  this.model = model;
}
Car.prototype.wheels = 4;

const car = new Car("Civic");

car.hasOwnProperty("model");   // true — own property
car.hasOwnProperty("wheels");  // false — it's on the prototype

"model" in car;   // true — found on object
"wheels" in car;  // true — found on prototype chain

Use hasOwnProperty() when we want to check only the object’s own properties (like when iterating with for...in and we want to skip inherited ones).

for (const key in car) {
  if (car.hasOwnProperty(key)) {
    console.log(key); // only "model", not "wheels"
  }
}

In simple language, JavaScript doesn’t copy properties from parent to child. Instead, it creates a chain of links between objects. When we access a property, JS walks up this chain until it finds it (or reaches the end). This is prototypal inheritance — and it’s how everything in JavaScript works under the hood, including classes.


28

Classes

intermediate classes inheritance OOP ES6

Classes in JavaScript are syntactic sugar over prototypal inheritance. They don’t introduce a new inheritance model — they just give us a cleaner, more familiar syntax for doing what constructor functions and prototypes already did.

Basic Class Syntax

A class has a constructor method (called when we use new) and any number of methods:

class User {
  constructor(name, email) {
    this.name = name;
    this.email = email;
  }

  greet() {
    console.log(`Hi, I'm ${this.name}`);
  }

  getEmail() {
    return this.email;
  }
}

const user = new User("Manish", "manish@example.com");
user.greet(); // "Hi, I'm Manish"

Under the hood, greet and getEmail are added to User.prototype — exactly the same as the old User.prototype.greet = function() {} pattern.
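
We can confirm this in the console: the methods are not copied onto each instance, they live once on the prototype (a small sketch with its own User class):

```javascript
class User {
  constructor(name) { this.name = name; }
  greet() { return `Hi, I'm ${this.name}`; }
}

const u1 = new User("Manish");
const u2 = new User("Rahul");

console.log(Object.getOwnPropertyNames(u1));    // ["name"], no greet here
console.log(u1.greet === User.prototype.greet); // true
console.log(u1.greet === u2.greet);             // true, shared, not copied
```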

Inheritance with extends and super

We use extends to create a child class and super to call the parent’s constructor or methods.

class Animal {
  constructor(name) {
    this.name = name;
  }

  speak() {
    console.log(`${this.name} makes a sound`);
  }
}

class Dog extends Animal {
  constructor(name, breed) {
    super(name); // MUST call super() before using this
    this.breed = breed;
  }

  speak() {
    console.log(`${this.name} barks`); // overrides parent method
  }

  info() {
    super.speak(); // calls parent's speak()
    console.log(`Breed: ${this.breed}`);
  }
}

const dog = new Dog("Buddy", "Labrador");
dog.speak(); // "Buddy barks"
dog.info();  // "Buddy makes a sound" then "Breed: Labrador"

Important: if a child class has a constructor, it must call super() before accessing this.

Static Methods

Static methods belong to the class itself, not to instances. We call them on the class, not on objects.

class MathHelper {
  static add(a, b) {
    return a + b;
  }

  static random(min, max) {
    return Math.floor(Math.random() * (max - min + 1)) + min;
  }
}

MathHelper.add(2, 3);       // 5
MathHelper.random(1, 10);   // random number between 1-10

// const m = new MathHelper();
// m.add(2, 3); // TypeError — add is not a method on instances

Use static methods for utility functions that don’t need instance data.

Private Fields (#field)

Private fields start with #. They can only be accessed from inside the class — not from outside, not even from subclasses.

class BankAccount {
  #balance; // private field

  constructor(owner, balance) {
    this.owner = owner;
    this.#balance = balance;
  }

  deposit(amount) {
    this.#balance += amount;
  }

  getBalance() {
    return this.#balance;
  }
}

const account = new BankAccount("Manish", 1000);
account.deposit(500);
console.log(account.getBalance()); // 1500
// console.log(account.#balance);  // SyntaxError — private field

Getters and Setters

Getters and setters let us define computed properties that look like regular property access but run a function underneath.

class Circle {
  constructor(radius) {
    this.radius = radius;
  }

  get area() {
    return Math.PI * this.radius ** 2;
  }

  get diameter() {
    return this.radius * 2;
  }

  set diameter(value) {
    this.radius = value / 2;
  }
}

const c = new Circle(5);
console.log(c.area);     // 78.54 — accessed like a property, no ()
console.log(c.diameter);  // 10

c.diameter = 20;           // calls the setter
console.log(c.radius);    // 10

Classes are just prototypes underneath

This is important to understand — classes don’t add anything new to the language. They’re just a nicer way to write the same prototype-based code:

class User {
  constructor(name) { this.name = name; }
  greet() { console.log(`Hi, ${this.name}`); }
}

// is essentially the same as:
function User(name) { this.name = name; }
User.prototype.greet = function() { console.log(`Hi, ${this.name}`); };

// Proof:
console.log(typeof User); // "function" — classes are functions!

In simple language, classes give us a clean and familiar way to create objects with shared methods, set up inheritance, and organize our code. But under the hood, it’s still the same prototype chain we covered before. Think of classes as a nicer outfit on prototypes.


29

Iterators: for...in vs for...of

intermediate iterators for-in for-of loops

JavaScript has two loop constructs that look almost identical but do very different things: for...in and for...of. Mixing them up is a common source of bugs and a popular interview question.

for…in — Iterates over Keys

for...in loops over the enumerable property names (keys) of an object. It gives us strings.

const user = { name: "Manish", age: 25, city: "Pune" };

for (const key in user) {
  console.log(key);        // "name", "age", "city"
  console.log(user[key]);  // "Manish", 25, "Pune"
}

It also walks up the prototype chain, which can give unexpected results:

function Person(name) { this.name = name; }
Person.prototype.species = "Human";

const p = new Person("Manish");

for (const key in p) {
  console.log(key); // "name", then "species" (from prototype!)
}

// To skip inherited properties:
for (const key in p) {
  if (p.hasOwnProperty(key)) {
    console.log(key); // only "name"
  }
}

for…of — Iterates over Values

for...of loops over the values of any iterable — arrays, strings, Maps, Sets, NodeLists, etc.

const colors = ["red", "green", "blue"];

for (const color of colors) {
  console.log(color); // "red", "green", "blue"
}

const name = "Manish";
for (const char of name) {
  console.log(char); // "M", "a", "n", "i", "s", "h"
}

The Key Difference

The simplest way to remember: in for keys, of for values.

const arr = ["a", "b", "c"];

for (const x in arr) {
  console.log(x);   // "0", "1", "2" — the indices (keys), as strings!
}

for (const x of arr) {
  console.log(x);   // "a", "b", "c" — the actual values
}

When to Use Which

Use for...in for objects — when we need to loop over an object’s properties:

const config = { theme: "dark", lang: "en", debug: false };
for (const key in config) {
  console.log(`${key} = ${config[key]}`);
}

Use for...of for arrays, strings, Maps, Sets — when we need the values:

const scores = [85, 92, 78, 95];
let total = 0;
for (const score of scores) {
  total += score;
}

Why not use for…in on arrays?

It works, but it’s a bad idea because:

  1. It gives us string indices, not numbers
  2. It iterates over all enumerable properties, not just array elements
  3. The order was historically not guaranteed (the for...in enumeration order was only fully specified in ES2020, so older code could not rely on it)
const arr = ["a", "b"];
arr.custom = "oops";

for (const key in arr) {
  console.log(key); // "0", "1", "custom" — includes non-index properties!
}

for (const val of arr) {
  console.log(val); // "a", "b" — only the actual array values
}

for…of and plain objects

Plain objects are not iterable by default, so for...of throws an error:

const user = { name: "Manish" };
// for (const val of user) {} // TypeError: user is not iterable

// Use Object.entries() to make it work:
for (const [key, val] of Object.entries(user)) {
  console.log(`${key}: ${val}`); // "name: Manish"
}

Symbol.iterator (brief mention)

An object is iterable if it has a Symbol.iterator method. Arrays, strings, Maps, and Sets all have it built in. We can make our own objects iterable by defining one, but that’s an advanced topic.

const arr = [1, 2, 3];
console.log(typeof arr[Symbol.iterator]); // "function" — arrays are iterable

const obj = { a: 1 };
console.log(typeof obj[Symbol.iterator]); // "undefined" — plain objects are not
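
For the curious, here is what defining Symbol.iterator looks like. This range object is illustrative, but it follows the real iterator protocol, so for...of and spread both work on it:

```javascript
// A hand-made iterable: anything with a Symbol.iterator method works with for...of.
const range = {
  from: 1,
  to: 3,
  [Symbol.iterator]() {
    let current = this.from;
    const last = this.to;
    return {
      // each call to next() returns { value, done } per the iterator protocol
      next: () =>
        current <= last
          ? { value: current++, done: false }
          : { value: undefined, done: true }
    };
  }
};

console.log([...range]); // [1, 2, 3]
for (const n of range) console.log(n); // 1, 2, 3
```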

In simple language: for...in gives us keys (and walks up the prototype chain), for...of gives us values (from iterables only). Use for...in for objects, for...of for arrays and other iterables. When in doubt, for...of is usually what you want.


30

Map & Set

intermediate Map Set WeakMap WeakSet collections

ES6 introduced Map and Set as specialized collection types. While we can do most things with plain objects and arrays, these have specific advantages that make them the better choice in certain situations.

Map

A Map is like an object, but with some important differences — it can have any type as a key (not just strings), it maintains insertion order, and it has a size property.

const map = new Map();

// Setting values — keys can be anything
map.set("name", "Manish");
map.set(42, "the answer");
map.set(true, "yes");

const objKey = { id: 1 };
map.set(objKey, "object as a key!");

// Getting values
map.get("name");    // "Manish"
map.get(42);        // "the answer"
map.get(objKey);    // "object as a key!"

// Other methods
map.has("name");    // true
map.delete(42);     // removes the entry
map.size;           // 3
map.clear();        // removes everything

Iterating over a Map

const userRoles = new Map([
  ["Manish", "admin"],
  ["Rahul", "editor"],
  ["Priya", "viewer"]
]);

for (const [name, role] of userRoles) {
  console.log(`${name} is ${role}`);
}

userRoles.forEach((role, name) => {
  console.log(`${name}: ${role}`);
});

Map vs Object — when to use which

Feature       | Map                             | Object
Key types     | Any (object, number, boolean)   | Strings and Symbols only
Order         | Guaranteed insertion order      | Not guaranteed in all cases
Size          | map.size                        | Object.keys(obj).length
Iteration     | Directly iterable               | Needs Object.entries()
Performance   | Better for frequent add/delete  | Better for static structures
Prototype     | No inherited keys               | Has prototype keys

Use Map when: keys aren’t strings, we need to frequently add/delete entries, we need the size, or we need guaranteed order.

Use Object when: keys are strings, the structure is mostly static (like config, JSON data), or we need JSON serialization.
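
A small example of where Map shines: counting occurrences, where .get with a default plus .set is cleaner than the object equivalent (wordCounts is our own helper):

```javascript
// Count how many times each word appears, using Map's get/set.
function wordCounts(text) {
  const counts = new Map();
  for (const word of text.toLowerCase().split(/\s+/)) {
    counts.set(word, (counts.get(word) || 0) + 1);
  }
  return counts;
}

const counts = wordCounts("the cat and the hat");
console.log(counts.get("the")); // 2
console.log(counts.size);       // 4 unique words
```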

Set

A Set is like an array, but every value must be unique. Duplicates are automatically ignored.

const set = new Set();

set.add(1);
set.add(2);
set.add(3);
set.add(2); // ignored — already exists
set.add(1); // ignored — already exists

console.log(set.size); // 3
console.log(set.has(2)); // true

set.delete(2);
console.log(set.size); // 2

Common use case: removing duplicates from an array

const numbers = [1, 2, 3, 2, 4, 1, 5, 3];
const unique = [...new Set(numbers)];
console.log(unique); // [1, 2, 3, 4, 5]

This one-liner is one of the most used Set patterns in real code.
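
The same spread pattern extends to basic set operations like union and intersection. A sketch (newer engines also ship built-in methods such as Set.prototype.union, but the spread/filter version works everywhere):

```javascript
const a = new Set([1, 2, 3]);
const b = new Set([2, 3, 4]);

const union = new Set([...a, ...b]);                        // all values from both
const intersection = new Set([...a].filter(x => b.has(x))); // values in both
const difference = new Set([...a].filter(x => !b.has(x)));  // in a but not in b

console.log([...union]);        // [1, 2, 3, 4]
console.log([...intersection]); // [2, 3]
console.log([...difference]);   // [1]
```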

Iterating over a Set

const tags = new Set(["javascript", "react", "node"]);

for (const tag of tags) {
  console.log(tag);
}

tags.forEach(tag => console.log(tag));

WeakMap

A WeakMap is like a Map, but with two big restrictions:

  1. Keys must be objects (no primitives)
  2. Keys are weakly referenced — if nothing else references the key object, it gets garbage collected along with its value
const weakMap = new WeakMap();

let user = { name: "Manish" };
weakMap.set(user, "some metadata");

console.log(weakMap.get(user)); // "some metadata"

user = null; // the object can now be garbage collected
// The entry in weakMap is automatically cleaned up

WeakMaps are not iterable — we can’t loop over them or get their size. This is by design, because entries can disappear at any time due to garbage collection.

Use case: Storing private data or metadata associated with objects without preventing garbage collection. Common in frameworks for caching DOM node data.
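
A sketch of that metadata pattern (attachMeta and getMeta are hypothetical helper names):

```javascript
// Attach metadata to objects without modifying them or blocking garbage collection.
const metadata = new WeakMap();

function attachMeta(obj, info) {
  metadata.set(obj, info);
}

function getMeta(obj) {
  return metadata.get(obj);
}

const node = { id: 1 };
attachMeta(node, { visits: 3 });
console.log(getMeta(node)); // { visits: 3 }

// When `node` becomes unreachable, its metadata entry is collected too,
// with no manual cleanup needed.
```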

WeakSet

Same idea as WeakMap, but for unique values:

  1. Values must be objects
  2. Weakly referenced — garbage collected when no other reference exists
  3. Not iterable, no size property
const weakSet = new WeakSet();

let obj = { id: 1 };
weakSet.add(obj);
weakSet.has(obj); // true

obj = null; // object can be garbage collected

Use case: Tracking which objects have been “seen” or processed, without preventing them from being garbage collected.

const visited = new WeakSet();

function processNode(node) {
  if (visited.has(node)) return; // already processed
  visited.add(node);
  // ... process the node
}

Quick summary

  • Map — object with any key type, ordered, has .size. Use when keys aren’t strings or we need frequent add/delete.
  • Set — array with unique values only. Great for deduplication and membership checks.
  • WeakMap — Map with object-only keys that don’t prevent garbage collection. Use for metadata/caching.
  • WeakSet — Set with object-only values that don’t prevent garbage collection. Use for tracking objects.

In simple language, Map and Set fill the gaps that plain objects and arrays can’t cover well. Map for when we need non-string keys or care about size/order, Set for when we need unique values. The Weak variants are for when we want the garbage collector to clean up after us automatically.


DOM & Events

31

Events

beginner DOM events browser

JavaScript’s interaction with HTML is handled through events that occur when the user or the browser manipulates a page.

When the page finishes loading, that is an event. When the user clicks a button, that click is an event too. Other examples include pressing a key, closing a window, and resizing a window.

We can attach JavaScript responses to these events: closing windows, displaying messages to users, validating data, and virtually any other behavior imaginable.

Events are a part of the Document Object Model (DOM) Level 3 and every HTML element contains a set of events which can trigger JavaScript Code.


32

Event Bindings

beginner DOM events

Whenever we want to bind an event to an HTML element, we use addEventListener() or the element's onclick property. This is called event binding.

document.getElementById('button').addEventListener("click", () => {
  console.log("clicked");
});

document.getElementById('button').onclick = () => {
  console.log("clicked");
}

The main difference between the two is that addEventListener() allows multiple handlers on the same event, while onclick replaces the previous handler.

const btn = document.getElementById('button');

// addEventListener - both handlers will fire
btn.addEventListener("click", () => console.log("first"));
btn.addEventListener("click", () => console.log("second"));
// Click → "first" then "second"

// onclick - second handler replaces the first
btn.onclick = () => console.log("first");
btn.onclick = () => console.log("second");
// Click → "second" only

addEventListener() also accepts a third parameter for options like capture, once, and passive.

To remove a bound event, we use removeEventListener() with the same function reference. This is why we should store the handler in a variable instead of writing an anonymous function directly.

function handleClick() {
  console.log("clicked");
}

btn.addEventListener("click", handleClick);
btn.removeEventListener("click", handleClick); // works

btn.addEventListener("click", () => console.log("clicked"));
btn.removeEventListener("click", () => console.log("clicked")); // does NOT work, different reference

33

Event Propagation, Bubbling and Capturing

intermediate DOM events propagation

What is Event Propagation?

Event Propagation determines in which order the elements receive the event. When we click on a nested element, the event doesn’t just fire on that element — it travels through the entire DOM tree in three phases:

  1. Capturing Phase — the event goes from the top (window) down to the target element (parent to child)
  2. Target Phase — the event reaches the actual element that was clicked
  3. Bubbling Phase — the event goes back up from the target to the top (child to parent)

In simple language, when we click on something, the event first goes down the DOM tree, hits the target, and then comes back up. Most of the time we only care about the bubbling phase (which is the default behavior).

Capturing ↓   window → body → div → p → a (click target)
Bubbling  ↑   a → p → div → body → window

Event Bubbling

The bubbling principle is simple. When an event happens on an element, it first runs the handlers on it, then on its parent, then all the way up on other ancestors.

Let’s say we have 3 nested elements FORM > DIV > P with a handler on each of them:

<form onclick="alert('form')">FORM
  <div onclick="alert('div')">DIV
    <p onclick="alert('p')">P</p>
  </div>
</form>

When we click on <p>, it first shows the alert for p, then fires the handler on its parent <div>, and then on <form> — the event bubbles up through each ancestor in turn.

To stop it, event.stopPropagation() is used:

<body onclick="alert(`the bubbling doesn't reach here`)">
  <button onclick="event.stopPropagation()">Click me</button>
</body>

Event Capturing

The reverse of bubbling — handlers fire from the outermost ancestor down toward the target. Capturing is off by default; to listen during this phase, pass true (or { capture: true }) as the third argument to addEventListener():

document.getElementById("el").addEventListener(
  "click",
  () => {},
  true // <-- This enables capturing mode
);

34

Event Delegation

intermediate DOM events patterns

Event Delegation is basically a pattern to handle events efficiently. Instead of adding an event listener to each and every similar element, we add a single listener to a common parent and identify which child actually triggered the event using the event object's .target property.

For example, if we have a list of 100 items, instead of adding 100 click listeners, we add one listener on the parent <ul> and use event.target to identify which <li> was clicked.

ul#todo-list          ← single listener here
 ├─ li  Task 1
 ├─ li  Task 2
 └─ li  Task 3

click on any li → event bubbles up → the parent's listener catches it via event.target
<ul id="todo-list">
  <li>Task 1</li>
  <li>Task 2</li>
  <li>Task 3</li>
</ul>
// Without delegation — adding listener to each item
document.querySelectorAll('#todo-list li').forEach(item => {
  item.addEventListener('click', function() {
    console.log(this.textContent);
  });
});

// With delegation — single listener on the parent
document.getElementById('todo-list').addEventListener('click', function(event) {
  if (event.target.tagName === 'LI') {
    console.log(event.target.textContent);
  }
});

The delegated approach is better because it uses only one event listener instead of many. And the best part is, if we add a new <li> to the list later, the click handler will automatically work for it without adding any new listener.

In simple language, event.target is the element that was actually clicked (the <li>), and event.currentTarget is the element where we attached the listener (the <ul>).


35

DOM Manipulation

beginner DOM elements browser manipulation

DOM manipulation is basically how JavaScript talks to the HTML page. We can select elements, change them, add new ones, or remove existing ones — all through JavaScript. Let’s go through each part.

Selecting Elements

These are the methods we use to grab elements from the page.

// By ID — returns a single element
const header = document.getElementById('header');

// By CSS selector — returns the FIRST match
const btn = document.querySelector('.btn-primary');

// By CSS selector — returns ALL matches (NodeList)
const items = document.querySelectorAll('.list-item');

// Loop through all matches
items.forEach(item => console.log(item.textContent));

In simple language, querySelector is like a Swiss army knife — it takes any CSS selector. Use getElementById when you have an ID, and querySelectorAll when you need multiple elements.

Creating Elements

We can create brand new elements from scratch and then add them to the page.

// Create a new element
const div = document.createElement('div');

// Create a text node
const text = document.createTextNode('Hello there!');

// Add the text to the div
div.appendChild(text);

Modifying Elements

Once we have an element, we can change pretty much anything about it.

const card = document.querySelector('.card');

// Change text (safe, no HTML parsing)
card.textContent = 'Updated text';

// Change HTML inside (parses HTML — be careful with user input)
card.innerHTML = '<strong>Bold text</strong>';

// Set attributes
card.setAttribute('data-id', '42');
card.setAttribute('role', 'article');

// classList — add, remove, toggle classes
card.classList.add('active');
card.classList.remove('hidden');
card.classList.toggle('dark-mode'); // adds if missing, removes if present

A quick note — prefer textContent over innerHTML when you’re just setting text. innerHTML parses HTML, which is slower and can be a security risk (XSS) if you’re inserting user input.
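If user input really must go through innerHTML, one option is to escape it first. Here is a minimal, hypothetical helper (not a built-in DOM API) that replaces the five characters HTML treats specially:

```javascript
// Escape &, <, >, " and ' so untrusted text can't inject markup.
// Each matched character is swapped for its HTML entity.
function escapeHTML(str) {
  return String(str).replace(/[&<>"']/g, ch => ({
    '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;'
  }[ch]));
}

escapeHTML('<img src=x onerror=alert(1)>');
// → '&lt;img src=x onerror=alert(1)&gt;'
```

For plain text, though, textContent does all of this for free — the helper is only for cases where we're building an HTML string ourselves.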

Inserting Elements

There are several ways to add elements to the page. The newer methods (append, prepend) are more flexible.

const list = document.querySelector('#todo-list');
const newItem = document.createElement('li');
newItem.textContent = 'New task';

// appendChild — adds at the end (returns the appended node)
list.appendChild(newItem);

// insertBefore — adds before a specific child
const firstItem = list.firstElementChild;
list.insertBefore(newItem, firstItem);

// append — adds at the end (can take strings too)
list.append('Some text', newItem);

// prepend — adds at the beginning
list.prepend(newItem);

The difference between appendChild and append is that append can take multiple arguments, including plain strings, while appendChild takes exactly one Node.

Removing Elements

Two ways to remove an element — the modern way is much cleaner.

const item = document.querySelector('.old-item');

// Modern way — call remove() on the element itself
item.remove();

// Old way — ask the parent to remove the child
item.parentNode.removeChild(item);

remove() is supported in all modern browsers. The removeChild approach is the old-school way you’ll see in legacy code, but it does the same thing.


36

Web Storage & Cookies

beginner storage localStorage sessionStorage cookies browser

Browsers give us a few ways to store data on the client side. The three main ones are localStorage, sessionStorage, and cookies. They each have different lifetimes, sizes, and behaviors — let’s break them down.

                   localStorage              sessionStorage         Cookies
Persistence        Forever (until cleared)   Until tab closes       Until expiry date
Size limit         ~5 MB                     ~5 MB                  ~4 KB
Sent to server     No                        No                     Yes, every request
Scope              Same origin               Same origin + tab      Same origin (configurable)

localStorage

Data stays even after we close the browser. It’s the most common one for things like user preferences, theme settings, or cached data.

// Store data (must be strings)
localStorage.setItem('theme', 'dark');
localStorage.setItem('user', JSON.stringify({ name: 'Manish' }));

// Read data
const theme = localStorage.getItem('theme'); // 'dark'
const user = JSON.parse(localStorage.getItem('user'));

// Remove one item
localStorage.removeItem('theme');

// Clear everything
localStorage.clear();

sessionStorage

Works exactly like localStorage, but the data is cleared when the tab is closed. If we open a new tab, it gets its own separate sessionStorage.

// Same API as localStorage
sessionStorage.setItem('formStep', '2');
const step = sessionStorage.getItem('formStep'); // '2'

// Gone when the tab is closed
sessionStorage.removeItem('formStep');

This is useful for temporary things like multi-step form data or a one-time banner that shouldn’t show again in the same session.

Cookies

Cookies are the oldest storage mechanism. The big difference is that cookies are sent to the server with every HTTP request — that’s why they’re used for authentication tokens. But the size limit is tiny (about 4KB).

// Set a cookie (expires in 7 days)
document.cookie = "username=Manish; expires=" +
  new Date(Date.now() + 7 * 86400000).toUTCString() + "; path=/";

// Read all cookies (returns one big string)
console.log(document.cookie); // "username=Manish; theme=dark"

// Delete a cookie (set expiry in the past)
document.cookie = "username=; expires=Thu, 01 Jan 1970 00:00:00 UTC; path=/";

The cookie API is honestly terrible to work with. That’s why most people use a small library or write helper functions.
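As a sketch of such a helper, here is a hypothetical parseCookies() that turns a raw cookie string into an object — in the browser you would pass it document.cookie:

```javascript
// Parse "a=1; b=2" into { a: '1', b: '2' }.
// Values are URI-decoded; '=' inside a value is preserved.
function parseCookies(cookieString) {
  return cookieString
    .split('; ')
    .filter(Boolean) // an empty cookie string yields {}
    .reduce((acc, pair) => {
      const [key, ...rest] = pair.split('=');
      acc[key] = decodeURIComponent(rest.join('='));
      return acc;
    }, {});
}

parseCookies('username=Manish; theme=dark');
// → { username: 'Manish', theme: 'dark' }
```

A matching setCookie helper would do the reverse — build the "name=value; expires=...; path=/" string shown above.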

When to Use Which

  • localStorage — User preferences, theme, cached API data, anything that should survive a browser restart
  • sessionStorage — Temporary form data, one-time notifications, wizard/step state within a single tab
  • Cookies — Authentication tokens, session IDs, anything the server needs to see on every request

In simple language, if the server doesn’t need it, use localStorage or sessionStorage. If the server needs to read it on every request (like an auth token), use cookies.


ES6+ & Modern JS

37

Modules

beginner modules import export ES6 CommonJS

Before modules, all JavaScript code shared one global scope — which was a nightmare for large projects. Modules let us split code into separate files, each with its own scope, and explicitly share only what we want.

Named Exports and Imports

We can export multiple things from a file by name. When importing, we use the exact same names (inside curly braces).

// math.js
export const PI = 3.14159;
export function add(a, b) { return a + b; }
export function multiply(a, b) { return a * b; }

// app.js
import { PI, add, multiply } from './math.js';
console.log(add(2, 3)); // 5

We can also rename imports if there’s a name collision:

import { add as sum } from './math.js';
console.log(sum(2, 3)); // 5

Default Exports and Imports

Each file can have one default export. When importing a default, we don’t use curly braces and we can name it anything we want.

// logger.js
export default function log(msg) {
  console.log(`[LOG] ${msg}`);
}

// app.js — we can call it whatever we want
import log from './logger.js';
import myLogger from './logger.js'; // this works too

We can mix default and named exports in the same file, though it’s generally cleaner to pick one style.

// utils.js
export default function main() { /* ... */ }
export const VERSION = '1.0';

// app.js
import main, { VERSION } from './utils.js';

Re-exporting

When building a library or module with multiple files, we can re-export from an index file to create a clean public API.

// components/index.js
export { Button } from './Button.js';
export { Modal } from './Modal.js';
export { default as Card } from './Card.js';

// app.js — clean single import
import { Button, Modal, Card } from './components/index.js';

Dynamic Imports

Sometimes we don’t want to load a module until we actually need it. import() returns a Promise, so we can use it with await or .then().

// Load a heavy module only when the user clicks
button.addEventListener('click', async () => {
  const { Chart } = await import('./chart.js');
  const chart = new Chart('#canvas');
  chart.render(data);
});

This is great for code splitting — the browser doesn’t download the module until it’s needed, making the initial page load faster.

CommonJS vs ES Modules

This is a common interview question. CommonJS (require) is the old Node.js way. ES Modules (import/export) is the standard that works in both browsers and modern Node.js.

// CommonJS (Node.js traditional way)
const fs = require('fs');
module.exports = { myFunction };
module.exports = myFunction; // default-like export (note: this would overwrite the line above)

// ES Modules (modern standard)
import fs from 'fs';
export { myFunction };
export default myFunction;

Key differences:

  • CommonJS loads modules synchronously at runtime. require can be called anywhere, even inside if blocks.
  • ES Modules are statically analyzed at build time. import must be at the top level (except dynamic import()).
  • CommonJS uses require() / module.exports. ES Modules use import / export.
  • In Node.js, use .mjs extension or set "type": "module" in package.json to use ES Modules.

In simple language, if we’re writing modern JavaScript (whether for the browser or Node.js), we should use ES Modules. CommonJS still works in Node.js but ES Modules are the future.


38

Optional Chaining & Nullish Coalescing

beginner optional-chaining nullish-coalescing ES2020 operators

These two operators (introduced in ES2020) solve very common pain points. Let’s look at each one.

Optional Chaining (?.)

Before optional chaining, accessing a deeply nested property was painful because we had to check every level to avoid “Cannot read property of undefined” errors.

const user = { address: { city: 'Mumbai' } };

// Old way — check every level
const city = user && user.address && user.address.city;

// With optional chaining — clean and short
const city2 = user?.address?.city; // 'Mumbai'

If any part in the chain is null or undefined, it short-circuits and returns undefined instead of throwing an error.

const user = {};
console.log(user?.address?.city);    // undefined (no error!)
console.log(user?.address?.city?.toUpperCase()); // undefined

Works with methods and arrays too

Optional chaining isn’t just for properties — we can use it with method calls and array access.

const user = { greet: null, scores: [10, 20] };

// Method calls — won't throw if method doesn't exist
user.greet?.();           // undefined (greet is null, not called)
user.sayHello?.();        // undefined (doesn't exist, no error)

// Array access
user.scores?.[0];         // 10
user.friends?.[0];        // undefined (friends doesn't exist)

Nullish Coalescing (??)

The ?? operator returns the right-hand side only when the left-hand side is null or undefined. Not when it’s 0, "", or false — that’s the key difference from ||.

const count = 0;

// || treats 0 as falsy, so it falls through
console.log(count || 10);   // 10 (not what we want!)

// ?? only checks for null/undefined
console.log(count ?? 10);   // 0 (correct!)

The Big Difference: ?? vs ||

This comes up in interviews all the time. || returns the right side for any falsy value (0, "", false, null, undefined, NaN). ?? returns the right side only for null and undefined.

console.log(0 || 'default');       // 'default' (0 is falsy)
console.log(0 ?? 'default');       // 0

console.log('' || 'default');      // 'default' (empty string is falsy)
console.log('' ?? 'default');      // ''

console.log(false || 'default');   // 'default'
console.log(false ?? 'default');   // false

console.log(null || 'default');    // 'default'
console.log(null ?? 'default');    // 'default' (same here)

console.log(undefined || 'default'); // 'default'
console.log(undefined ?? 'default'); // 'default' (same here)

In simple language, use ?? when 0, "", or false are valid values that you want to keep. Use || when you want to fall through on any falsy value.
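One related gotcha worth knowing: the spec deliberately forbids mixing ?? directly with || or && in one expression — it's a SyntaxError unless parentheses make the intent explicit:

```javascript
// null || undefined ?? 'fallback';   // SyntaxError — ambiguous, rejected at parse time

// With parentheses the intent is explicit, and it parses fine:
const value = (null || undefined) ?? 'fallback';
console.log(value); // 'fallback'
```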

Using Them Together

Optional chaining and nullish coalescing pair really well together — one safely accesses the value, the other provides a fallback.

const user = { settings: { theme: null } };

// Safely access + provide a default
const theme = user?.settings?.theme ?? 'light';
console.log(theme); // 'light' (theme is null, so ?? kicks in)

39

Symbols

advanced Symbol ES6 primitives iterators

Symbol is a primitive type introduced in ES6. Every Symbol is unique — even if two Symbols have the same description, they’re not equal. The main use case is creating property keys that are guaranteed to never collide with anything else.

Creating Symbols

const s1 = Symbol();
const s2 = Symbol();
console.log(s1 === s2); // false (every Symbol is unique)

// Description is just a label for debugging
const id = Symbol('id');
console.log(id.toString()); // 'Symbol(id)'
console.log(id.description); // 'id'

Note that we don’t use new Symbol() — Symbol is a primitive, not an object.
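We can see this directly — calling Symbol as a constructor throws:

```javascript
let caught;
try {
  new Symbol('id'); // TypeError: Symbol is not a constructor
} catch (err) {
  caught = err;
}

console.log(caught instanceof TypeError); // true
```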

Symbols as Object Keys

This is the main use case. When we use a Symbol as a key, it won’t collide with any string key or any other Symbol key.

const ID = Symbol('id');
const user = {
  name: 'Manish',
  [ID]: 12345  // Symbol key — uses computed property syntax
};

console.log(user[ID]);    // 12345
console.log(user.name);   // 'Manish'

This is useful in libraries — if we add a Symbol property to a user’s object, we won’t accidentally overwrite any of their existing properties.

Symbols Are Not Enumerable

Symbols don’t show up in for...in, Object.keys(), or JSON.stringify(). This makes them great for “hidden” metadata properties.

const secret = Symbol('secret');
const obj = { visible: true, [secret]: 'hidden value' };

console.log(Object.keys(obj));          // ['visible']
console.log(JSON.stringify(obj));        // '{"visible":true}'
for (let key in obj) console.log(key);  // 'visible'

// But we CAN access them if we know how
console.log(Object.getOwnPropertySymbols(obj)); // [Symbol(secret)]

Symbol.for() — Global Symbol Registry

Symbol.for() creates a Symbol in a global registry. If a Symbol with that key already exists, it returns the same one. This is how we share Symbols across files or modules.

const s1 = Symbol.for('app.id');
const s2 = Symbol.for('app.id');
console.log(s1 === s2); // true (same Symbol from registry)

// Regular Symbol() always creates a new one
const s3 = Symbol('app.id');
console.log(s1 === s3); // false
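The registry also works in reverse — Symbol.keyFor() returns the key a Symbol was registered under, or undefined for ordinary Symbols:

```javascript
const shared = Symbol.for('app.id');
console.log(Symbol.keyFor(shared)); // 'app.id' (found in the global registry)

const local = Symbol('app.id');
console.log(Symbol.keyFor(local));  // undefined (never registered)
```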

Well-Known Symbols

JavaScript has built-in Symbols that let us customize how objects behave with language features. The most important ones:

Symbol.iterator

Makes an object iterable (usable in for...of loops and spread syntax).

const range = {
  from: 1,
  to: 3,
  [Symbol.iterator]() {
    let current = this.from;
    const last = this.to;
    return {
      next() {
        return current <= last
          ? { value: current++, done: false }
          : { done: true };
      }
    };
  }
};

console.log([...range]); // [1, 2, 3]
for (const n of range) console.log(n); // 1, 2, 3

Symbol.toPrimitive

Controls how an object is converted to a primitive value.

const money = {
  amount: 100,
  currency: 'INR',
  [Symbol.toPrimitive](hint) {
    if (hint === 'number') return this.amount;
    if (hint === 'string') return `${this.amount} ${this.currency}`;
    return this.amount; // default
  }
};

console.log(+money);        // 100
console.log(`${money}`);    // '100 INR'
console.log(money + 50);    // 150

In simple language, Symbols are like guaranteed-unique IDs. We use them when we need a property key that can never accidentally clash with anything else — and the well-known symbols let us hook into JavaScript’s own behavior.

40

Proxy & Reflect

advanced Proxy Reflect metaprogramming ES6

Proxy lets us wrap an object and intercept operations on it — like reading a property, setting a value, checking if a key exists, etc. Think of it as putting a guard in front of an object that can inspect and modify every interaction.

How Proxy Works

A Proxy takes two arguments: the target object and a handler with traps (functions that intercept operations).

const user = { name: 'Manish', age: 25 };

const proxy = new Proxy(user, {
  get(target, prop) {
    console.log(`Reading "${prop}"`);
    return target[prop];
  },
  set(target, prop, value) {
    console.log(`Setting "${prop}" to ${value}`);
    target[prop] = value;
    return true; // must return true for success
  }
});

proxy.name;          // logs: Reading "name" → 'Manish'
proxy.age = 26;      // logs: Setting "age" to 26

Common Handler Traps

Here are the traps we use most often:

  • get(target, prop) — reading a property
  • set(target, prop, value) — writing a property
  • has(target, prop) — the in operator
  • deleteProperty(target, prop) — the delete operator
  • apply(target, thisArg, args) — calling a function
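The apply trap is the only one in the list without an example below, so here is a minimal sketch — wrapping a function so every call gets logged:

```javascript
function add(a, b) { return a + b; }

// The apply trap fires whenever the proxied function is called
const loggedAdd = new Proxy(add, {
  apply(target, thisArg, args) {
    console.log(`add called with [${args}]`);
    return Reflect.apply(target, thisArg, args); // forward to the real function
  }
});

const sum = loggedAdd(2, 3); // logs: add called with [2,3]
console.log(sum);            // 5
```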

Use Case: Validation Proxy

One of the most practical uses — we can validate values before they’re set on an object.

const validator = {
  set(target, prop, value) {
    if (prop === 'age') {
      if (typeof value !== 'number') throw TypeError('Age must be a number');
      if (value < 0 || value > 150) throw RangeError('Age must be 0-150');
    }
    target[prop] = value;
    return true;
  }
};

const person = new Proxy({}, validator);
person.name = 'Manish';  // works fine
person.age = 25;          // works fine
// person.age = -5;       // RangeError: Age must be 0-150
// person.age = 'old';    // TypeError: Age must be a number

Use Case: Logging Proxy

We can wrap any object to log every access — useful for debugging.

function withLogging(obj) {
  return new Proxy(obj, {
    get(target, prop) {
      console.log(`[GET] ${prop} → ${target[prop]}`);
      return target[prop];
    },
    set(target, prop, value) {
      console.log(`[SET] ${prop} = ${value}`);
      target[prop] = value;
      return true;
    }
  });
}

const config = withLogging({ debug: false, port: 3000 });
config.debug;         // [GET] debug → false
config.port = 8080;   // [SET] port = 8080

Reflect

Reflect is a built-in object that provides methods matching every Proxy trap. Instead of directly doing target[prop], we can use Reflect.get(target, prop) — it’s cleaner and always returns the correct default behavior.

const proxy = new Proxy(user, {
  get(target, prop, receiver) {
    console.log(`Accessing ${prop}`);
    return Reflect.get(target, prop, receiver); // proper default
  },
  set(target, prop, value, receiver) {
    console.log(`Setting ${prop}`);
    return Reflect.set(target, prop, value, receiver);
  },
  has(target, prop) {
    console.log(`Checking if "${prop}" exists`);
    return Reflect.has(target, prop);
  }
});

Why use Reflect instead of target[prop]? Because Reflect methods return success/failure booleans, handle edge cases with inheritance correctly (via the receiver parameter), and map 1-to-1 with every Proxy trap.
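The boolean-return behavior is easy to see with a frozen object — the equivalent direct operations would throw in strict mode (or fail silently otherwise), while Reflect simply reports the outcome:

```javascript
const frozen = Object.freeze({ a: 1 });

console.log(Reflect.set(frozen, 'a', 2));         // false — write rejected
console.log(Reflect.deleteProperty(frozen, 'a')); // false — delete rejected
console.log(Reflect.has(frozen, 'a'));            // true  — property still exists
console.log(frozen.a);                            // 1     — unchanged
```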

Real-World Usage

This isn’t just a theoretical concept. Vue.js 3 uses Proxy to power its reactivity system. When we change a reactive property, Vue’s Proxy trap detects the change and triggers a re-render. Before Vue 3, they used Object.defineProperty which had limitations (couldn’t detect new property additions or array index changes).
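To make that concrete, here is a minimal sketch of Proxy-based reactivity — not Vue's actual code, just the core idea of a set trap notifying a subscriber (the way a framework would trigger a re-render):

```javascript
// Wrap an object so every successful write calls onChange.
function reactive(obj, onChange) {
  return new Proxy(obj, {
    set(target, prop, value, receiver) {
      const ok = Reflect.set(target, prop, value, receiver);
      if (ok) onChange(prop, value); // notify after the write lands
      return ok;
    }
  });
}

const log = [];
const state = reactive({ count: 0 }, (prop, value) => log.push(`${prop} → ${value}`));

state.count = 1;
state.count = 2;
console.log(log); // ['count → 1', 'count → 2']
```

A real framework also tracks which component read which property in the get trap, so only the affected components update — but the set trap above is the heart of it.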

In simple language, Proxy is like a security guard for objects — every time someone tries to read, write, or check something on the object, the guard can inspect it, modify it, or block it. Reflect just gives us a clean way to do the “normal” thing inside those guard functions.


Patterns & Practice

41

Output-Based Questions

intermediate interview output hoisting coercion this typeof

These are the “What’s the output?” questions that come up in almost every JavaScript interview. For each one, try to guess the answer before reading the explanation.


1. var hoisting in loops with setTimeout

for (var i = 0; i < 3; i++) {
  setTimeout(() => console.log(i), 100);
}

Output: 3, 3, 3

Why: var is function-scoped, not block-scoped. There’s only one i variable shared across all iterations. By the time the setTimeout callbacks run, the loop has already finished and i is 3. If we change var to let, each iteration gets its own i and we’d get 0, 1, 2.


2. == type coercion traps

console.log([] == false);
console.log([] == ![]);
console.log('' == false);
console.log(0 == '');

Output:

true
true
true
true

Why: The == operator coerces both sides to the same type. [] == false — the array is coerced to "", then to 0, and false coerces to 0, so 0 == 0 is true. [] == ![] — ![] evaluates first to false (arrays are truthy), so it becomes [] == false, which we just covered. '' == false — both coerce to 0. 0 == '' — the empty string coerces to 0. This is exactly why we use === instead of ==.


3. this in different contexts

const obj = {
  name: 'Manish',
  greet: function() { console.log(this.name); },
  greetArrow: () => { console.log(this.name); }
};

obj.greet();
obj.greetArrow();

const fn = obj.greet;
fn();

Output:

Manish
undefined
undefined

Why: obj.greet() — regular function called on obj, so this is obj. obj.greetArrow() — arrow functions don’t have their own this, they use the this from the surrounding scope (which is the module/global scope, where name is undefined). fn() — we extracted the function and called it without any object, so this is the global object (or undefined in strict mode), and this.name is undefined.


4. Promise and setTimeout ordering

console.log('A');

setTimeout(() => console.log('B'), 0);

Promise.resolve().then(() => console.log('C'));

Promise.resolve().then(() => {
  console.log('D');
  setTimeout(() => console.log('E'), 0);
});

console.log('F');

Output: A, F, C, D, B, E

Why: Synchronous code runs first (A, F). Then the microtask queue is drained — Promise callbacks C and D run (microtasks have higher priority). While running D, a new setTimeout E is scheduled. Now the macrotask queue runs: B (scheduled first) then E. The rule is: all microtasks drain before the next macrotask.


5. typeof gotchas

console.log(typeof null);
console.log(typeof undefined);
console.log(typeof NaN);
console.log(typeof typeof 1);

Output:

object
undefined
number
string

Why: typeof null is "object" — this is a famous bug from the first version of JavaScript that was never fixed. typeof undefined is "undefined". typeof NaN is "number" — yes, “Not a Number” is technically a number type. typeof typeof 1 — inner typeof 1 returns the string "number", then typeof "number" returns "string".


6. Array method surprises

console.log([1, 2, 3].map(parseInt));

Output: [1, NaN, NaN]

Why: map passes three arguments to the callback: (element, index, array). So it actually calls parseInt(1, 0), parseInt(2, 1), parseInt(3, 2). parseInt(1, 0) — radix 0 is treated as radix 10, returns 1. parseInt(2, 1) — radix 1 is invalid (base-1 doesn’t exist), returns NaN. parseInt(3, 2) — radix 2 means binary, and 3 is not valid binary, returns NaN.
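The usual fixes are passing an explicit radix, or using Number — which, called as a plain function, only looks at its first argument:

```javascript
console.log([1, 2, 3].map(Number));               // [1, 2, 3] — extra index/array args ignored
console.log([1, 2, 3].map(n => parseInt(n, 10))); // [1, 2, 3] — explicit base-10 radix
```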


7. Object reference and comparison

const a = { x: 1 };
const b = { x: 1 };
const c = a;

console.log(a === b);
console.log(a === c);
console.log({ x: 1 } === { x: 1 });

Output:

false
true
false

Why: Objects are compared by reference, not by value. a and b look the same but they’re two different objects in memory — different references. c was assigned the same reference as a, so a === c is true. The last one creates two brand new objects — different references, so false.


8. Closure and setTimeout with let vs var

for (let i = 0; i < 3; i++) {
  setTimeout(() => console.log(i), 0);
}

for (var j = 0; j < 3; j++) {
  (function(j) {
    setTimeout(() => console.log(j), 0);
  })(j);
}

Output: 0, 1, 2, 0, 1, 2

Why: First loop — let creates a new binding for each iteration, so each callback captures its own i. Second loop — the IIFE creates a new scope for each iteration, capturing the current value of j as a parameter. Both are fixes for the classic var + setTimeout problem. The IIFE approach is the pre-ES6 solution, let is the modern one.


42

Polyfills

advanced polyfill interview array-methods bind implementation

A polyfill is a piece of code that implements a feature on browsers or environments that don’t natively support it. Interviewers love asking us to write polyfills because it tests whether we truly understand how these methods work under the hood — not just how to use them.

Array.prototype.myMap

map creates a new array by calling a callback on every element. The callback receives (element, index, array).

Array.prototype.myMap = function(callback, thisArg) {
  const result = [];
  for (let i = 0; i < this.length; i++) {
    if (i in this) { // skip holes in sparse arrays
      result.push(callback.call(thisArg, this[i], i, this));
    }
  }
  return result;
};

// Usage
const nums = [1, 2, 3];
console.log(nums.myMap(n => n * 2)); // [2, 4, 6]

Array.prototype.myFilter

filter creates a new array with only the elements where the callback returns true.

Array.prototype.myFilter = function(callback, thisArg) {
  const result = [];
  for (let i = 0; i < this.length; i++) {
    if (i in this && callback.call(thisArg, this[i], i, this)) {
      result.push(this[i]);
    }
  }
  return result;
};

// Usage
const nums = [1, 2, 3, 4, 5];
console.log(nums.myFilter(n => n % 2 === 0)); // [2, 4]

Array.prototype.myReduce

reduce is the trickiest one. It takes a callback and an optional initial value. If no initial value is provided, the first element is used as the accumulator and iteration starts from index 1.

Array.prototype.myReduce = function(callback, initialValue) {
  let accumulator;
  let startIndex = 0;

  // Check arguments.length, not `initialValue !== undefined` —
  // an explicit undefined is itself a legal initial value
  if (arguments.length >= 2) {
    accumulator = initialValue;
  } else {
    if (this.length === 0) throw new TypeError('Reduce of empty array with no initial value');
    accumulator = this[0];
    startIndex = 1;
  }

  for (let i = startIndex; i < this.length; i++) {
    if (i in this) {
      accumulator = callback(accumulator, this[i], i, this);
    }
  }
  return accumulator;
};

// Usage
const nums = [1, 2, 3, 4];
console.log(nums.myReduce((sum, n) => sum + n, 0)); // 10
console.log(nums.myReduce((sum, n) => sum + n));     // 10 (no initial value)

Function.prototype.myBind

bind creates a new function with this locked to a specific value. It can also pre-fill arguments (partial application). This one is probably the most asked polyfill in interviews.

Function.prototype.myBind = function(thisArg, ...boundArgs) {
  const fn = this;
  return function(...callArgs) {
    return fn.apply(thisArg, [...boundArgs, ...callArgs]);
  };
};

// Usage
const user = { name: 'Manish' };

function greet(greeting, punctuation) {
  return `${greeting}, ${this.name}${punctuation}`;
}

const greetManish = greet.myBind(user, 'Hello');
console.log(greetManish('!'));  // 'Hello, Manish!'
console.log(greetManish('.')); // 'Hello, Manish.'

The key things to remember about bind: it returns a new function (doesn’t call it immediately), it permanently sets this, and any arguments passed to bind are prepended to the arguments passed when the bound function is called.

In simple language, writing polyfills shows the interviewer that we understand what’s happening inside these methods — the loop, the callback signature, the this context, and edge cases like sparse arrays or missing initial values.


43

Design Patterns

advanced design-patterns module singleton observer factory

Design patterns are reusable solutions to common problems. We don’t need to memorize all of them, but knowing the four most common ones in JavaScript will come up in interviews and help us write better code.

Module Pattern

Uses closures to create private variables and expose only a public API. Before ES Modules existed, this was the go-to way to avoid polluting the global scope.

const Counter = (function() {
  // Private — can't be accessed from outside
  let count = 0;

  // Public API
  return {
    increment() { count++; },
    decrement() { count--; },
    getCount() { return count; }
  };
})();

Counter.increment();
Counter.increment();
console.log(Counter.getCount()); // 2
console.log(Counter.count);      // undefined (private!)

The IIFE runs once and returns an object. The returned methods form a closure over count, so they can access it but nobody else can. jQuery used this pattern extensively.

Singleton Pattern

Ensures only one instance of something exists. Useful for things like a database connection, a logger, or a global store.

const Database = (function() {
  let instance;

  function createInstance() {
    return {
      host: 'localhost',
      query(sql) { console.log(`Running: ${sql}`); }
    };
  }

  return {
    getInstance() {
      if (!instance) {
        instance = createInstance();
      }
      return instance;
    }
  };
})();

const db1 = Database.getInstance();
const db2 = Database.getInstance();
console.log(db1 === db2); // true (same instance)

The first call to getInstance() creates the object. Every subsequent call returns that same object. Redux store is a singleton — there’s only one store for the entire app.

Observer Pattern

Often grouped with Pub/Sub (publish/subscribe). One object (the subject) maintains a list of dependents (observers) and notifies them when something changes.

function createEventEmitter() {
  const listeners = {};

  return {
    on(event, callback) {
      if (!listeners[event]) listeners[event] = [];
      listeners[event].push(callback);
    },
    emit(event, data) {
      (listeners[event] || []).forEach(cb => cb(data));
    },
    off(event, callback) {
      if (!listeners[event]) return;
      listeners[event] = listeners[event].filter(cb => cb !== callback);
    }
  };
}

const emitter = createEventEmitter();
const handler = data => console.log('Got:', data);

emitter.on('message', handler);
emitter.emit('message', 'Hello!');  // Got: Hello!
emitter.off('message', handler);
emitter.emit('message', 'Hello?');  // (nothing — handler was removed)

This pattern is everywhere. addEventListener in the browser is an observer. Node.js EventEmitter, RxJS, and Vue’s reactivity system all use variations of this pattern.

Factory Pattern

A function that creates and returns objects without using new. Useful when we need to create many similar objects with slight variations.

function createUser(name, role) {
  return {
    name,
    role,
    permissions: role === 'admin'
      ? ['read', 'write', 'delete']
      : ['read'],
    describe() {
      return `${this.name} (${this.role})`;
    }
  };
}

const admin = createUser('Manish', 'admin');
const viewer = createUser('Pika', 'viewer');

console.log(admin.permissions); // ['read', 'write', 'delete']
console.log(viewer.permissions); // ['read']

The factory hides the creation logic. The caller doesn’t care how the object is built — they just say what they want. React.createElement() is a factory. Express’s express() call is also a factory that creates an app instance.

Quick Reference

Pattern      Core Idea                     Real-World Example
Module       Private state via closures    jQuery, old-school JS libraries
Singleton    One instance, global access   Redux store, DB connections
Observer     Subscribe to changes          addEventListener, Node EventEmitter
Factory      Function creates objects      React.createElement, express()

In simple language, these patterns aren’t rules we have to follow — they’re proven solutions that people have found useful over time. Knowing them helps us recognize patterns in existing code and write cleaner solutions ourselves.