<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Evaluation on Juntak Noh — AI Notes</title>
    <link>https://ai.klavierhye.cc/tags/evaluation/</link>
    <description>Recent content in Evaluation on Juntak Noh — AI Notes</description>
    <generator>Hugo -- 0.147.7</generator>
    <language>en</language>
    <lastBuildDate>Thu, 19 Feb 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://ai.klavierhye.cc/tags/evaluation/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Did Fine-Tuning Actually Help? Evaluating and Benchmarking Whisper for Korean STT</title>
      <link>https://ai.klavierhye.cc/posts/whisper-evaluation/</link>
      <pubDate>Thu, 19 Feb 2026 00:00:00 +0000</pubDate>
      <guid>https://ai.klavierhye.cc/posts/whisper-evaluation/</guid>
      <description>&lt;p&gt;&lt;em&gt;This is &lt;strong&gt;Part 3&lt;/strong&gt; of a three-part series on fine-tuning Whisper for Korean speech-to-text: Preprocess → Train → &lt;strong&gt;Evaluate&lt;/strong&gt;. Here we measure whether the fine-tuned model actually improved, and by how much. &lt;a href=&#34;https://ai.klavierhye.cc/posts/whisper-preprocessing/&#34;&gt;Part 1&lt;/a&gt; covered preprocessing; &lt;a href=&#34;https://ai.klavierhye.cc/posts/whisper-training/&#34;&gt;Part 2&lt;/a&gt; covered training.&lt;/em&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;A trained model without evaluation is just a checkpoint on disk. You can stare at the training loss curve and hope it went down, but until you run the model on held-out data and measure something concrete — CER, WER, per-category breakdowns — you don&amp;rsquo;t know whether the fine-tuning worked, whether it regressed on certain domains, or how it compares to the baseline you started from.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
