Where Human Eyes Go on Google in 2026
In 2005, an eye tracking study coined a term that shaped an entire industry’s thinking for the next decade. Researchers strapped cameras to people’s heads, showed them Google search results, and discovered that human eyes traced a predictable triangular pattern across the page. Top left, scanning right, then dropping down the left side. They called it the golden triangle, and every SEO strategy for years afterward was built on the assumption that this pattern was fixed.
It wasn’t. The pattern was a product of a specific interface at a specific moment in time. Google showed ten blue links. People read them top to bottom. The triangle was real, but it was describing behavior shaped by a page that no longer exists.
The Golden Triangle Is Dead
The original golden triangle study came from a marketing agency called Enquiro in 2005. The methodology was sound for its time. Twenty-odd participants. Eye tracking cameras. A search results page that looked nothing like the one you used this morning. No knowledge panels. No featured snippets. No “People Also Ask” boxes. No AI Overviews. No local packs. No shopping carousels. Just links.
A Miratech study conducted between 2008 and 2009 with participants from France, Japan, and the United States confirmed the general pattern but started picking up variations. The central results area drew 75% of eye activity. The top of the page got 19%, mostly because people looked back at the search bar to type new queries. Sponsored links on the right drew 6% and were only seen by a third of participants. But even in that study, gaze plots showed that each user read the results differently. Some focused on titles. Others focused on URLs. The “golden triangle” was already an average of behaviors that varied substantially between individuals.
The Nielsen Norman Group, which has been running eye tracking research for over 13 years, eventually cataloged four distinct scanning patterns people use on text-heavy pages: the F-pattern, the spotted pattern, the layer-cake pattern, and the commitment pattern. The F-pattern got all the press. It became the default assumption. And then Google redesigned its search page roughly once a quarter for the next fifteen years, and every redesign made the F-pattern less predictive.
By 2019, NN/G’s own research concluded that the modern search results page has so many design elements that users don’t have a simple way of picking out their preferred link. Their eye tracking data showed something closer to a pinball machine. Eyes bouncing between items, scanning non-sequentially, distributing attention across the page in ways that would have looked random to the researchers in 2005.
Where Eyes Actually Go Now
The Mediative study that Search Engine Land covered in 2014 put hard numbers on the shift. People were viewing more search results listings during a single session and spending less time on each one. The average viewing time per listing dropped from just under 2 seconds in 2005 to 1.17 seconds. That’s barely enough time to read a title and glance at the URL.
Two things were driving the change. First, Google’s interface had evolved to include multiple content types on a single page. Maps. Images. Videos. Knowledge panels. Each new element competed for the same finite attention span. The eyes had more to look at, so they spent less time on each thing.
Second, and this is the one most SEO analyses miss, mobile devices had fundamentally altered how people scan visual information. Smartphones trained an entire generation to scroll vertically. Swipe down. Scan. Keep going. That vertical scanning habit migrated back to desktop behavior. People who grew up scrolling on phones carried that muscle memory to their laptops. The horizontal left-to-right sweep that defined the golden triangle was replaced by a vertical bounce that prioritizes the first recognizable element in each listing over the systematic reading of every word.
The practical consequence was counterintuitive. Positions 2 through 4 on the search results page started seeing more click activity than they had in earlier years. The dominance of position 1 was eroding, not because position 1 became less visible, but because scanning behavior became faster and less linear. A user whose eyes bounce rather than sweep is more likely to notice the second or third result before committing to the first.
The CTR Numbers Nobody Wants to Hear
Backlinko analyzed 4 million Google search results and published the most widely cited CTR-by-position dataset in the industry. Their findings: the number 1 organic result gets an average click-through rate of 27.6%. Number 2 drops to 18.7%. Number 3 lands at 10.2%. After that, the decline is steep. The number 1 result is 10 times more likely to receive a click than the page sitting in position 10.
First Page Sage’s 2025 report, which combined data from multiple sources including Backlinko, SISTRIX, and their own internal datasets, put it differently. The top 3 organic results receive more than two-thirds of all clicks on the Google search page. That’s 68.7% of every click going to three listings. The other seven organic results on page one split the remaining 31.3%, and anything on page two is effectively invisible.
There’s a specific pain point in the data that matters more than the headline numbers. Moving from position 10 to position 9 doesn’t produce a statistically significant change in CTR. The difference between positions 8, 9, and 10 is negligible. But moving from position 3 to position 2 produces a significant boost. The CTR curve isn’t linear. It’s exponential at the top and flat at the bottom. Every position gained in the top five is worth dramatically more than every position gained in the bottom five.
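The shape of that curve is easier to see with numbers in hand. Here is a minimal sketch: positions 1 through 3 use the Backlinko figures cited above, while positions 4 through 10 are hypothetical fill-ins chosen only to illustrate the "steep at the top, flat at the bottom" shape, not measured values.

```python
# CTR by SERP position, in percent. Positions 1-3 are the cited Backlinko
# figures; positions 4-10 are illustrative placeholders consistent with a
# steep-then-flat decline, NOT real study data.
ctr_by_position = {
    1: 27.6, 2: 18.7, 3: 10.2,        # cited figures
    4: 7.2, 5: 5.1, 6: 4.0, 7: 3.2,   # illustrative values
    8: 2.6, 9: 2.3, 10: 2.2,          # illustrative values
}

def gain_from_move(start: int, end: int) -> float:
    """CTR percentage points gained by moving from `start` to `end`."""
    return ctr_by_position[end] - ctr_by_position[start]

print(f"3 -> 2:  +{gain_from_move(3, 2):.1f} points")   # a large jump
print(f"10 -> 9: +{gain_from_move(10, 9):.1f} points")  # barely measurable
```

Even with made-up numbers in the tail, the structural point survives: the same one-position move is worth an order of magnitude more near the top of the page than near the bottom.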
I’d been thinking about these numbers differently since I saw them applied to a live site. Actually, let me rephrase that. I’d been quoting them to people without questioning them until I watched them not match what was happening in a real Google Search Console dashboard. The CTR on a branded query at position 6 was outperforming the expected curve. The CTR on an informational query at position 3 was underperforming. The averages are useful, but they describe a world that doesn’t account for search intent, SERP composition, or the thing that happened in 2025 that broke all the old models.
AI Overviews Changed the Math
Google expanded AI Overviews aggressively around the March 2025 core update. By May 2025, GrowthSRC’s analysis showed AI Overviews appearing for 172,855 keywords, up from 10,000 keywords in August 2024. That’s a 17x expansion in under a year.
GrowthSRC ran a study of 200,000 keywords and found something that should have been on the front page of every marketing publication. Position 1 CTR dropped from 28% to 19%. That’s a 32% decline. Position 2 dropped from 20.83% to 12.60%, a 39% decline. The average decline across positions 1 through 5 was 17.92%.
Google’s CEO Sundar Pichai claimed in an interview that links within AI Overviews get higher click-through rates than traditional results. The industry data suggests otherwise. BrightEdge reported that since AI Overviews launched, impressions increased by 49% but CTR dropped by approximately 30%. More people see the results. Fewer people click.
The mechanism is straightforward. AI Overviews push organic results below the fold. The answer appears at the top of the page. A significant percentage of users get what they need without scrolling. Google is monetizing its own content at the expense of the publishers who created the source material. The same pattern as every platform play in the last twenty years, just executed by the company that controls the distribution channel. I wrote about watching this dynamic unfold on a brand new domain in How Google Parked at My Site in 28 Days, and even there the AI Overview question was already shaping which queries brought traffic and which ones Google kept for itself.
But here’s the counterintuitive finding that changes everything for people thinking about SEO strategy.
The Bottom of Page One Just Got Valuable
While positions 1 through 5 were losing clicks, positions 6 through 10 gained 30.63% more clicks year over year. That number is from the same GrowthSRC study that documented the decline at the top.
The behavioral explanation makes sense once you think about it. A user sees an AI Overview at the top of the page. It answers the surface-level question. But a subset of users don’t trust the AI answer, or they want more depth, or they want to verify what the AI told them against a real source. Those users scroll past the AI Overview and past the top organic results that the AI already cited, and they click on something lower on the page. Something that looks like it might offer a perspective the AI didn’t include.
A page ranking at position 8 in 2025 may deliver comparable traffic to what position 5 delivered in 2020. The CTR curve didn’t just shift down. It flattened. The gap between the top and the bottom of page one is narrower than it has ever been.
I haven’t tested this myself with enough controlled data to call it proven. The GrowthSRC dataset is large but it’s one study. The pattern could be temporary, a transitional behavior that normalizes as users get accustomed to AI answers. Or it could be permanent, a structural change in how people interact with search results when the first thing they see is a machine-generated summary. I genuinely don’t know which it is yet. Nobody does. But the data so far says the bottom of page one is worth more than it was, and that’s not nothing.
What Makes People Click
With 1.17 seconds of attention per listing, the title tag is doing almost all the work. Backlinko’s analysis found that title tags between 40 and 60 characters have a 33.3% higher CTR than titles outside that range. Too short and there isn’t enough information to make a decision. Too long and Google truncates it, cutting off the part of the title that might have been the reason to click.
Positive sentiment in titles improved CTR by approximately 4% in the same study. Not a massive number, but measurable. Titles framed as solutions or opportunities outperform titles framed as problems or threats, at least in aggregate. Whether that holds for specific niches is a different question.
URLs that contain terms similar to the search keyword have a 45% higher click-through rate compared to URLs with no keyword match. That’s an argument for clean, descriptive slugs over auto-generated URL strings. The slug isn’t just for Google’s crawler. It’s for the human eye that has 1.17 seconds to decide whether to click.
Something I changed my mind about recently. I used to think question-based titles were inherently stronger because they matched search intent. The Backlinko data says otherwise. Titles that contain a question and titles that don’t have roughly similar CTRs. The question format isn’t a magic bullet. The specificity of the title matters more than its grammatical structure.
The Meta Description Question
This is where the standard advice and the empirical observation diverge.
The textbook says write a compelling meta description for every page. Keep it under 155 characters. Include a call to action. Include the target keyword. Optimize it for the click.
Here’s what actually happens when you don’t write a meta description. Google pulls the most relevant passage from the page content based on the specific query the person typed. One page, a hundred different queries, a hundred different snippets. Each one tailored to the intent behind that particular search. A fixed meta description shows the same 160 characters regardless of what the person searched for. No description lets Google match the snippet to the question in real time.
And when Google pulls a passage that exceeds its display limit, it truncates it and adds a blue “Read more” link directly in the search snippet. That blue link is the only clickable element in the snippet area that isn’t the title or the URL. On a results page full of static grey descriptions, a blue “Read more” is a visual interrupt. The eye goes to color. The eye goes to action words. It’s a free call-to-action in the search results that nobody else on the page has.
Whether this consistently outperforms a well-written meta description across all niches and query types, I can’t say definitively. I haven’t run the kind of controlled A/B test that would prove it. What I can say is that the logic holds. Dynamic matching to intent should outperform static copy in a world where one page ranks for dozens of different queries. And the blue “Read more” link is a visual cue that the eye tracking research says should capture attention.
Most SEO practitioners would push back hard on this. (I know because I’ve had the conversation.) The meta description is one of the few things they can control in the SERP, and telling them to stop writing them feels like telling a pilot to let go of the yoke. But the autopilot might actually be better at this particular task. Google has more data on what makes people click than any copywriter does. If you want to see what the SEO industry looks like from the other side of the pitch, I wrote about that in Most Guest Posts Are Spam and the Industry Knows It.
Long-Tail Keywords and the Intent Signal
Keywords between 10 and 15 words in length get 2.62 times more clicks than single-word terms. That’s from the Backlinko study, and it makes intuitive sense. A person who types “best AI for philosophical discussions 2026” knows exactly what they want. Their intent is narrow. The result that matches it gets clicked immediately because the user isn’t browsing. They’re hunting.
A person who types “AI” is doing something fundamentally different. Mixed intent. Browsing. Not sure what they’re looking for yet. They might click five results or zero. The CTR for any individual result in that scenario is lower because the user hasn’t committed to a specific need.
This connects back to the AI Overview problem. AI Overviews appear most frequently on broad, informational queries. The kind of search where Google’s AI can synthesize a general answer from multiple sources. Long-tail keywords with specific intent show fewer AI Overviews because the specificity of the query makes a general answer less useful. The traditional CTR curve still applies to those searches.
The strategic implication: targeting long-tail, high-intent keywords gives you two advantages simultaneously. Higher CTR per impression because the user knows what they want. And lower AI Overview interference because Google’s AI doesn’t have a generic answer for a specific question.
Mobile Changed Desktop Behavior
The eye tracking research consistently shows that mobile browsing habits have migrated to desktop behavior. People scroll more than they used to on desktop. They scan vertically. They spend less time on any single element. The 1.17-second viewing time per listing is a desktop number that would have been unthinkable in 2005 when desktop was the only screen.
For sites where the audience is primarily desktop (research-heavy niches, B2B, technical content, academic topics), this vertical scanning behavior means the page structure matters more than it used to. Headings that stand out visually are scanned first. When headings and subheadings are descriptive and visually distinct, users engage in what NN/G calls the layer-cake pattern, reading headings and skipping body text until a heading matches their interest. Then they read the section underneath.
That’s an argument for heading structures that do real work. Not “Introduction” and “Background.” Headings that contain the answer or the tension. “AI Overviews Changed the Math” tells a scanning reader exactly what that section contains. “Section 3” tells them nothing.
There’s also a finding about banner blindness that most people have heard but few internalize. Users consistently ignore design elements that resemble advertisements or appear in locations traditionally dedicated to ads. If your content sits next to or near anything that looks like an ad, the eye skips it. This extends to internal elements that look promotional. A “Subscribe to Our Newsletter” box in the middle of an article gets the same treatment as a display ad. The eye tracks past it.
What This Actually Means for Your Strategy
The top 3 positions on Google capture roughly two-thirds of all clicks. That concentration hasn’t changed, though the exact percentages have shifted. If you can reach the top 3 for a query that matters to your business, you should. The economics of position 1 versus position 4 are not close.
But the bottom of page one is no longer the wasteland it was. Positions 6 through 10 are gaining value as AI Overviews push users further down the page. If your pages are sitting in that range, the traffic might be better than you think, and it might be improving without you doing anything because Google is changing user behavior for you.
Title tags between 40 and 60 characters. Descriptive slugs with keyword-relevant terms. Dynamic snippets that match intent instead of fixed meta descriptions that match nothing perfectly. Long-tail keywords that dodge AI Overviews and capture high-intent users. Content structured with descriptive headings that serve the layer-cake scanning pattern. These are the levers that move CTR in a post-AI Overview world. I documented the 30-day results of applying these principles on a live site in What 30 Days of a Brand New Website Actually Looks Like, and the data tracks with everything the eye tracking research predicts.
The eye tracking research from 2005 built an industry on assumptions about a page that no longer exists. The 2026 search results page has AI answers, knowledge panels, video carousels, shopping widgets, People Also Ask boxes, and a user base trained by fifteen years of smartphone scrolling. The golden triangle didn’t evolve. It was demolished and replaced with something that changes every time Google ships an update.
I’m not sure the SEO industry has fully processed that yet. Most of the advice I see still assumes a user who reads search results the way someone in 2005 would have. Top to bottom. Left to right. One click and done. The data says otherwise, and the gap between what the data says and what most strategies assume is where the opportunity sits.
Whether that gap closes quickly or slowly depends on how fast people update their mental models. The eye tracking cameras already moved on. The question is whether the strategists will follow.
FAQ
What is the click-through rate for the number 1 position on Google?
The number 1 organic position on Google receives an average CTR of approximately 19% to 27.6% depending on the study and whether AI Overviews are present. With AI Overviews on the page, position 1 CTR dropped roughly 32% from 2024 to 2025.
How has eye tracking behavior changed on Google search results?
The golden triangle from 2005, and the F-pattern that followed it, no longer apply. Modern eye tracking studies show users scan search results in a pinball-like pattern, bouncing between results rather than reading top to bottom. The average viewing time per listing dropped from 2 seconds in 2005 to 1.17 seconds.
How do AI Overviews affect organic click-through rates?
AI Overviews caused a 32% decline in position 1 CTR and a 39% decline in position 2 CTR. Positions 6 through 10 saw a 30.63% increase in clicks as users scroll past AI answers to verify information from traditional sources.
What title tag length gets the highest click-through rate?
Title tags between 40 and 60 characters achieve 33.3% higher CTR compared to titles outside that range, according to Backlinko’s analysis of 4 million Google search results.
Are positions 6 through 10 on Google worth targeting?
In the AI Overview era, positions 6 through 10 gained over 30% more clicks year over year. A page ranking at position 8 in 2025 may deliver comparable traffic to what position 5 delivered in 2020. The CTR curve has flattened significantly.