Understanding Sound for Music Producers

A Soft Synced Companion Guide

Soft Synced Environment → Beginner Track → The Basics Course → Lesson 1

Introduction

In the app, you learned that sound is vibration shaped by frequency and amplitude. This guide expands on those basics and connects them directly to production. At Soft Synced, we believe producers aren’t just technicians, they’re translators, turning physics into feeling. Think of this guide as your reference layer: the details and visuals that don’t fit in the app, but that you’ll return to whenever you mix, design sounds, or troubleshoot recordings.

Sound = Vibration

Sound is air molecules moving back and forth, pressure changes we perceive as tone, noise, or music. But for a producer, vibration is raw clay. Every EQ move, every synth patch, every mix decision is shaping vibration into expression.

Two instruments can play the same note at the same volume and still sound different. The unique mix of overtones above the fundamental is what gives each sound its “voice.”

Timbre (pronounced “tam-ber”) is the unique quality or “color” of a sound that lets us distinguish between two instruments or voices playing the same pitch at the same loudness. A piano and a violin both playing the note A4 will sound different because of their timbre. What shapes timbre is the relative balance of harmonics, the attack and decay of the sound, and even noise or inharmonic content.

In short, timbre is how a sound feels to your ear: its character. Harmonics, the overtones stacked above the fundamental, are what make up that character.
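The idea can be sketched in code. This is a minimal illustration (the function and variable names are ours, not from the Soft Synced app): summing sine-wave harmonics over the same fundamental gives tones with the same pitch but different timbres.

```python
import math

def harmonic_tone(t, fundamental_hz, amplitudes):
    """Sample a tone at time t (seconds) by summing harmonics.

    amplitudes[n] scales the (n+1)-th harmonic, so amplitudes[0]
    is the fundamental itself.
    """
    return sum(
        amp * math.sin(2 * math.pi * fundamental_hz * (n + 1) * t)
        for n, amp in enumerate(amplitudes)
    )

# Same pitch (A3 = 220 Hz), two different timbres:
pure_sine = [1.0]                        # no overtones: a plain sine
saw_like = [1 / n for n in range(1, 8)]  # 1/n rolloff, roughly a sawtooth

sample = harmonic_tone(0.0005, 220.0, saw_like)
```

Both recipes vibrate 220 times per second, so they read as the same note; only the overtone mix (and so the character) differs.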

Frequency → Pitch

Frequency is the speed of vibration, measured in Hertz (Hz). Faster vibrations = higher pitch, slower = lower.

  • Octaves: Doubling the frequency raises the note by one octave (220 Hz → 440 Hz → 880 Hz are all “A”).

  • Frequency Ranges:

    • Bass (20–250 Hz): subs, kick, bass guitar

    • Mids (250 Hz–4 kHz): vocals, guitars, keys

    • Treble (4–20 kHz): cymbals, air, sparkle
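The octave rule is just repeated doubling, which a tiny sketch (hypothetical helper name) makes concrete:

```python
def octave_up(freq_hz, octaves=1):
    """Raise a frequency by whole octaves: each octave doubles it."""
    return freq_hz * (2 ** octaves)

# The A family from the text:
print(octave_up(220.0))      # 440.0 (A4)
print(octave_up(220.0, 2))   # 880.0 (A5)
print(octave_up(440.0, -1))  # 220.0 (negative octaves halve: back to A3)
```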

See the instrument frequency chart below for reference.


Instrument Frequency Chart

A visualization of the frequency ranges of common instruments, grouped by family (chart axis spans 20 Hz–20 kHz).

Percussion

  • Glockenspiel: 500 Hz–4.2 kHz
  • Xylophone: 250 Hz–2.1 kHz
  • Marimba: 90 Hz–1 kHz

Brass

  • Tuba: 60–350 Hz
  • Trombone: 80–600 Hz
  • French Horn: 90–900 Hz
  • Trumpet: 150 Hz–1 kHz

Woodwinds

  • Bassoon: 60–650 Hz
  • Baritone Sax: 110–900 Hz
  • Alto Sax: 140–900 Hz
  • Tenor Sax: 150 Hz–1 kHz
  • Clarinet: 150 Hz–2 kHz
  • Oboe: 250 Hz–1.6 kHz
  • Flute: 250 Hz–2.1 kHz
  • Piccolo: 600 Hz–4.2 kHz

Strings

  • Bass: 40–400 Hz
  • Bass Guitar: 40–400 Hz
  • Cello: 70 Hz–1 kHz
  • Guitar: 80 Hz–1.3 kHz
  • Viola: 130 Hz–1.2 kHz
  • Violin: 200 Hz–3.1 kHz

Voice & Keys

  • Pipe Organ: 20 Hz–8 kHz
  • Harp: 30 Hz–3.5 kHz
  • Piano: 30 Hz–4.2 kHz
  • Voice: 80 Hz–1 kHz
  • Harpsichord: 80 Hz–1.6 kHz

Amplitude → Loudness

Amplitude is the size of a vibration: how much air is being moved. We perceive it as loudness.

  • Perception vs. Physics: Doubling amplitude adds about 6 dB, but it doesn’t feel twice as loud; a perceived doubling of loudness takes roughly 10 dB. Human ears are most sensitive in the mids, less so at the extremes.

  • In Practice: Every fader, meter, compressor, and limiter in your DAW is an amplitude tool.

[Visual idea: DAW channel strip with a fader and meter.]
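To make the meter numbers concrete, here is a small sketch (the function name is ours, not any DAW’s API) converting a linear amplitude, where 1.0 is digital full scale, into decibels:

```python
import math

def dbfs(amplitude):
    """Convert a linear amplitude (1.0 = full scale) to dBFS."""
    return 20 * math.log10(amplitude)

print(round(dbfs(1.0), 1))  # 0.0   -> the digital ceiling
print(round(dbfs(0.5), 1))  # -6.0  -> half the amplitude is about 6 dB down
print(round(dbfs(0.1), 1))  # -20.0 -> a tenth of full scale
```

Notice the mismatch with perception: halving amplitude only drops the meter 6 dB, well short of “half as loud” to the ear.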

Waves Interacting

When waves combine, they either reinforce or cancel each other.

  • In Phase: Peaks and troughs line up → louder sound.

  • Out of Phase: Peaks oppose troughs → cancellation or thinness.

  • In Practice: Layer two snares slightly out of phase and the body disappears. Flip polarity or nudge timing and the punch comes back.
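The in-phase and out-of-phase cases above can be simulated directly. A minimal sketch (our own helper, using two equal-amplitude sines):

```python
import math

def summed_peak(phase_offset_rad, samples=1000):
    """Peak level of two equal sines summed at a given phase offset."""
    return max(
        abs(math.sin(x) + math.sin(x + phase_offset_rad))
        for x in (2 * math.pi * i / samples for i in range(samples))
    )

print(round(summed_peak(0.0), 2))      # 2.0 -> in phase: full reinforcement
print(round(summed_peak(math.pi), 2))  # 0.0 -> 180° out: total cancellation
```

Real layered sounds sit between these extremes, which is why a small timing nudge or polarity flip can swing a layer from thin to punchy.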

Producer FAQs

  • Why do two instruments sound different playing the same note at the same volume? Each instrument produces a different mix of vibrations. A piano string vibrates in complex ways, creating overtones that give it warmth and body. A synth might generate a smooth sine wave with no overtones, or a buzzy saw wave with lots of them. That difference is what we call timbre: the “voice” of a sound. As a producer, this matters because the instruments you choose will blend or clash depending on how their timbres interact. Layering a piano and a synth isn’t just about volume; it’s about how their unique fingerprints combine to make something new.

  • How low can humans actually hear? Most people can hear down to about 20 Hz. Anything below that is more about feeling than hearing: that chest-shaking sub-bass at a concert isn’t loud in the usual sense, it’s your body responding to vibrations. Understanding this helps you shape the emotional impact of your music. Too much deep rumble can make a mix muddy, but the right amount of sub can make a track hit physically. Knowing where human hearing begins and ends lets you focus your sound design where it actually matters.

  • What is headroom, and why leave it? In digital audio, the ceiling is 0 dBFS (decibels relative to full scale). Push past it, and your sound distorts in an ugly way. That’s why producers leave “headroom”: space below the maximum level. Keeping your tracks comfortably below the ceiling means you can add effects, balance instruments, and finish your mix without worrying about everything falling apart. Think of it like cooking: if you fill a pot to the very top, there’s no room to stir or add ingredients without spilling. Headroom keeps your mix flexible and clean.

  • Why does a sound sometimes disappear when I layer it? When two sounds don’t line up, their waves can cancel each other out. Imagine pouring two waves into a pool: if the peaks hit together, they splash bigger; if one peak hits a dip, they flatten each other. In music, this can make a kick drum suddenly lose its weight, or two layered vocals sound hollow. It’s not always obvious to beginners because nothing “looks” wrong in your DAW, but if something disappears or sounds weak when it should sound strong, phase might be the reason. Knowing this early gives you a huge advantage when layering sounds.
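The headroom point can be sketched as hard clipping, a deliberately simplified model (our own helper, not a DAW function) of what happens to samples pushed past 0 dBFS:

```python
def hard_clip(samples, ceiling=1.0):
    """Flatten any sample that exceeds full scale (digital hard clipping)."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

safe = hard_clip([0.2, -0.5, 0.8])  # within headroom: passes through unchanged
hot = hard_clip([0.9, 1.4, -1.7])   # pushed past the ceiling: peaks squared off
print(safe)  # [0.2, -0.5, 0.8]
print(hot)   # [0.9, 1.0, -1.0]
```

Those squared-off peaks are the “ugly” distortion: the waveform’s shape, and so its tone, is permanently changed.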

Quick Reference

Frequency Ranges

  • Kick: 50–100 Hz (body), 2–4 kHz (click).

  • Snare: 150–250 Hz (body), 5–8 kHz (snap).

  • Vocals: 100 Hz–1 kHz (core), 6–10 kHz (air).

  • Hi-hats: 6–12 kHz (sheen).

    These ranges are starting points. Every mix is different, but knowing where sounds usually sit helps you carve space and avoid clashing parts.

Amplitude facts:

  • Doubling amplitude ≠ double loudness.

  • The ear is most sensitive in the mids.

  • Peaks can measure loud without feeling loud.

  • Good mixes balance impact with comfort.

  • Contrast in dynamics creates interest.

    Amplitude isn’t just “volume” — it’s how your track breathes, how transients punch, and how quiet parts make the loud parts feel powerful.

Tools by dimension:

  • Frequency tools: EQ, filters, spectrum analyzers shape tone and clarity.

  • Amplitude tools: faders, compressors, limiters control balance and punch.

  • Combined tools: saturation and dynamics often affect both dimensions.

    Every move in your DAW touches one or both of these areas, and great producers constantly shift between them to shape emotion and energy in their mix.

Next Steps

The app gave you the core idea: sound = vibration, shaped by frequency and amplitude. This guide showed how that truth plays out in real music production. But don’t lose sight of the bigger picture: you’re not just learning audio terms. You’re learning how sound itself becomes story, emotion, and energy.

To continue, head back into the Soft Synced app and move on to Lesson 2, where we make digital audio make sense.

The Guides are your reference. The app is your journey.