Most teams open the migration project with two hopes: less JavaScript on the wire, and a visible lift in Core Web Vitals (especially LCP and INP). Those outcomes are possible, but they are not automatic. Vue 3 removes a lot of compatibility weight, yet your largest wins often come from what ships around the framework: route-level code splitting, image strategy, and server or CDN configuration.
This article is about the metrics we look at first after a Vue 2 to Vue 3 cutover, the surprises we still see in 2026, and how to report performance to stakeholders without over-promising a single number on a slide deck.
## 1. Start with field data, not just the build report
A rollup-plugin-visualizer treemap (or the Vite equivalent) is still essential—it shows which dependencies dominate the graph. But Core Web Vitals are measured on real users. After go-live, we line up CrUX (if you have enough traffic) or RUM you already run (DataDog, Sentry Performance, etc.) with at least a two-week window before/after the release.
Lab tools like Lighthouse remain useful for debugging, but they are a spot check. The pattern we watch for: lab scores improve while LCP p75 in the field is flat. That almost always means the critical path is not the framework—it is hero images, third-party scripts, or server time to first byte.
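The p75 that CrUX reports is simple to reproduce against your own RUM samples, which makes the lab-vs-field comparison concrete. A minimal sketch, with made-up LCP samples in milliseconds:

```javascript
// Sketch: compute p75 of field LCP samples (ms) using the
// nearest-rank method — the percentile CrUX reports.
function p75(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[idx];
}

// Made-up RUM data for illustration.
const lcpSamples = [1200, 1800, 2400, 2600, 3100, 4200, 900, 2200];
console.log(p75(lcpSamples)); // 2600
```

A handful of improved lab runs can coexist with a flat number here; that divergence is the signal to look past the framework.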
## 2. INP often moves before LCP
Interaction to Next Paint (INP) is sensitive to long tasks on the main thread. Vue 3’s reactivity and smaller runtime can reduce work per update, and moving off legacy Babel targets can help too—especially if you were shipping large polyfill bundles to support aging browsers for Vue 2.
LCP, by contrast, is frequently dominated by the largest image or font and by when your app becomes interactive enough to render it. Do not be surprised if INP improves while LCP needs a separate pass on assets and critical CSS. If you are also changing the build tool (Webpack to Vite, for example), confirm you did not accidentally change chunk boundaries in a way that delays the LCP image request.
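If you moved to Vite, chunk boundaries are ultimately controlled by Rollup's `manualChunks` option, and pinning them explicitly keeps the cutover from silently reshuffling what loads before the LCP image. A sketch, with the config export shown as a comment so the testable predicate stands alone:

```javascript
// vite.config.js (sketch): pin all node_modules code to one vendor
// chunk so chunk boundaries stay stable across the migration.
function chunkFor(id) {
  if (id.includes('node_modules')) return 'vendor';
  return undefined; // let Rollup decide for app code
}

// export default defineConfig({
//   build: { rollupOptions: { output: { manualChunks: chunkFor } } },
// });

console.log(chunkFor('/repo/node_modules/vue/dist/vue.js')); // 'vendor'
```

Whether one vendor chunk is right for your app is a separate question; the point is to make the boundary a deliberate choice rather than a build-tool default.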
## 3. The “surprise” we still see: total JS down, TBT in lab still high
Total kilobytes can drop while Total Blocking Time in Lighthouse stays elevated. Common causes: one vendor bundle (chat, analytics, A/B) still executing a long script during startup; synchronous JSON inlined in HTML; or a large synchronous import on a layout component that every route loads.
The fix is usually route-level or feature-level lazy loading, not another pass over Vue itself. This is where a migration checklist helps: verify dynamic imports for admin-only and modal-only chunks after the framework work is stable.
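The caching behavior behind route-level splitting can be sketched without the framework: Vue Router 4 does essentially this when a route's component is an async factory like `() => import('./Admin.vue')`. The `loadAdminChunk` name below is hypothetical:

```javascript
// Sketch: a lazily loaded, cached chunk. The factory runs once on
// first navigation; later navigations reuse the cached promise.
function lazy(factory) {
  let cached = null;
  return () => (cached ??= factory());
}

let loads = 0;
const loadAdminChunk = lazy(() => {
  loads += 1; // stands in for the network fetch of the chunk
  return Promise.resolve({ name: 'AdminPanel' });
});

loadAdminChunk();
loadAdminChunk();
console.log(loads); // 1 — the chunk is fetched a single time
```

The same pattern applies at feature level: wrap the modal or admin-only import in a factory that only runs on first use.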
## 4. How we report results to the business
We try to lead with one field metric the executive cares about (usually LCP p75 on a key template) and one engineering metric (main bundle size or JS parse time) so both sides see a coherent story. We avoid “the site is faster” without a labeled chart.
- Before/after for the same week-over-week period to control for seasonality
- Segment by connection type or country if traffic is global
- Call out confounders (marketing added pixels, a new hero image, a pricing page redesign)
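Segmenting before computing the percentile is what keeps a global traffic mix from hiding a regression: a country that got slower can disappear inside a flat worldwide p75. A sketch with made-up RUM rows:

```javascript
// Sketch: p75 LCP per country from made-up RUM rows.
const rows = [
  { country: 'US', lcp: 1800 }, { country: 'US', lcp: 2400 },
  { country: 'IN', lcp: 3900 }, { country: 'IN', lcp: 4600 },
];

function p75(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  return sorted[Math.ceil(0.75 * sorted.length) - 1];
}

// Group samples by segment, then take the percentile per group.
const bySegment = {};
for (const { country, lcp } of rows) {
  (bySegment[country] ??= []).push(lcp);
}
const report = Object.fromEntries(
  Object.entries(bySegment).map(([seg, vals]) => [seg, p75(vals)])
);
console.log(report); // { US: 2400, IN: 4600 }
```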
## 5. A measurement stack we actually run
There is no single tool that answers “did Vue 3 make us faster?” cleanly. The setup we use on most engagements is a small, boring combination of build-time, lab, and field tools that each answer a different question. The trick is wiring them up before you cut over so the “after” is comparable.
### Build-time
- `rollup-plugin-visualizer` or `vite-bundle-visualizer` for treemaps; we save the JSON output per release
- `size-limit` with a budget per route entry, wired into CI so a regression fails the PR
- Source map explorer when a chunk grows and you need to find the new tenant (often a transitive dependency, not your code)
### Lab
- Lighthouse CI on at least three template URLs (home, a long-tail content page, an authenticated dashboard)
- WebPageTest for waterfalls when LCP regresses and you need to see request order across CDN, app, and third parties
### Field
- CrUX (via PageSpeed Insights or BigQuery) for p75 LCP, INP, CLS by URL group
- RUM in your existing stack (Sentry, DataDog, New Relic) so you can segment by browser, country, and release
- A simple `web-vitals` shim that posts to your analytics endpoint when CrUX coverage is thin
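The shim can be as small as buffering `web-vitals` callbacks and flushing on page hide. Only the buffering half is shown runnable below; the browser wiring (the real `onLCP`/`onINP`/`onCLS` exports and `navigator.sendBeacon`) appears as comments, and the endpoint is illustrative:

```javascript
// Sketch of the buffering half of a web-vitals shim. In the browser:
//   import { onLCP, onINP, onCLS } from 'web-vitals';
//   onLCP(report); onINP(report); onCLS(report);
// and flush on visibilitychange with
//   navigator.sendBeacon('/rum', payload);
const queue = [];

function report(metric) {
  queue.push({ name: metric.name, value: metric.value });
}

function flush(send) {
  if (queue.length === 0) return false;
  send(JSON.stringify(queue.splice(0))); // drain the buffer
  return true;
}

report({ name: 'LCP', value: 2450 });
let sent = null;
flush((body) => { sent = body; });
console.log(sent); // '[{"name":"LCP","value":2450}]'
```

Injecting the `send` function keeps the buffer logic testable outside a browser.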
## 6. Common surprises after the cutover
Even on clean migrations, a few patterns repeat. None of them mean the upgrade was a mistake; they just remind you that bundle size is one input to user-perceived performance, not the whole story.
| Symptom | Likely cause | Where to look |
|---|---|---|
| JS bundle ↓ 25%, LCP unchanged | LCP element is an image, not a script | Hero `<img>`, `fetchpriority`, CDN cache |
| INP improved, CLS got worse | New skeleton layouts shifted post-hydrate | Reserved space for async slots |
| Vendor chunk grew on Vue 3 | UI library not yet tree-shakable | Vuetify/Element imports, side-effect flags |
| TBT in lab high, field INP fine | Lab CPU throttle is harsher than the median user | Trust field p75 over lab worst case |
| First load fast, second navigation slow | Route-level prefetch lost during split | Vue Router 4 prefetch |
## 7. A baseline budget that survives quarters
Most teams skip writing down a JavaScript budget because it feels arbitrary. It is, a little. But arbitrary numbers in CI still beat zero numbers in CI. We usually start with three budgets per template that are tracked release-over-release:
```json
// .size-limit.json
[
  { "path": "dist/assets/entry-home-*.js", "limit": "60 KB" },
  { "path": "dist/assets/entry-app-*.js", "limit": "180 KB" },
  { "path": "dist/assets/vendor-*.js", "limit": "220 KB" }
]
```

The limits are gzip-compressed sizes. Pick yours from a measured baseline plus 10% headroom; tighten quarterly. Pair budgets with a rule that any PR exceeding them must include a one-line justification in the PR description. That single discipline tends to keep bundles within range across teams that never otherwise talk about performance.
## 8. Anti-patterns we see post-migration
- “We removed Vue 2 polyfills, ship it.” Many apps still target browserslist defaults that include IE-era polyfills. Audit `browserslist` together with the upgrade.
- Global imports on shared layouts. A single `import dayjs from 'dayjs'` in `App.vue` can pull plugins into every route. Co-locate it inside the component that needs it.
- Storing performance data only in lab. Field data is the source of truth. Lab is for repro, not reporting.
- Comparing different weeks. Always align Tuesday-to-Tuesday over the same campaign or promotion window when comparing pre- and post-migration.
- Treating Vite as a free win. Build tool choice matters, but if your app is image-heavy or third-party heavy, see our Webpack-to-Vite mid-migration notes before claiming credit for runtime wins.
## 9. FAQ: questions stakeholders actually ask
### How much smaller will my bundle be after Vue 3?
In our engagements, the Vue runtime itself drops noticeably, but the meaningful number is the route-level entry. We often see 10–25% reductions when teams also drop unused plugins, switch to Pinia, and trim a heavy date or charting library. Some apps see no change because the framework was never the dominant cost.
### Will Core Web Vitals improve immediately?
INP usually improves first if your old app suffered from heavy reactive watchers or unnecessary re-renders. LCP generally needs a separate asset pass. Plan a follow-up sprint to measure and tune; do not promise day-one numbers.
### Should we add SSR to improve LCP?
Sometimes. SSR or pre-rendering helps when the LCP element depends on data fetched after hydration. It also adds operational cost. Our SSR + hydration deep-dive covers when the trade-off pays off.
### How long should we wait before reporting numbers?
At least 14 days of CrUX data after a stable release. Anything less is noise. Mention the window in every report.
## 10. A short post-migration checklist
- Lock a baseline before cutover (CrUX + lab + bundle JSON)
- Add `size-limit` budgets to CI
- Verify route-level code splitting per top template
- Audit `browserslist` and drop legacy targets
- Move heavy globals (date, charts, markdown) into lazy chunks
- Add `fetchpriority="high"` on the LCP image
- Pre-connect to required third parties; defer the rest
- Re-baseline budgets quarterly so wins compound
If you are still wrestling with state management or test stability while reading bundle reports, that is normal. See our full Vue 3 migration checklist and our notes on Cypress component tests after Vue 3 for sequencing.
Want a second pair of eyes on your post-migration numbers?
We help teams interpret build output, RUM, and release timing so performance work funds the right follow-up—whether that is image pipeline, server-side rendering, or another migration phase.
## Conclusion
Vue 3 is a real opportunity to improve Core Web Vitals, but the framework is only one layer in the stack. Measure field data, watch for INP and LCP diverging, and keep an eye on third-party and asset pipelines when the numbers look “wrong” after a successful upgrade.
Getting the measurement story right is what turns a technical win into budget for the next round of improvements.
## Related guides
Webpack to Vite mid-migration
Cut build times and improve dev feedback without migrating the app twice.
Vue 3 migration checklist
Verification items so performance work does not slip through the cracks.
DIY migration roadmap
Order of operations when you are upgrading the stack yourself.
Realistic migration timelines
Plan sprints when perf validation is part of the definition of done.
