links
Google's Knowledge Graph is a structured entity database that maps real-world objects (people, places, organisations, and concepts) to semantically rich attribute sets and inter-entity relationships, replacing string-matched keyword lookup with disambiguated, meaning-based retrieval. The system resolves lexical ambiguity (e.g., "Taj Mahal" as monument vs. musician vs. restaurant) by anchoring queries to canonical entities with unique identifiers, drawing on sources such as Freebase, Wikipedia, and the CIA World Factbook to populate typed properties and relational edges. This shifts indexing and ranking logic from document-keyword co-occurrence toward entity-to-entity graph traversal, enabling query expansion, direct answer surfacing, and contextual result clustering without requiring exact-match signals in crawled content.
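The entity-resolution idea above can be sketched in a few lines: entities get canonical ids, a type, typed properties, and relation edges, and a surface string like "Taj Mahal" is disambiguated by type context rather than string match. This is a minimal illustrative sketch, not Google's implementation; the entity ids, property names, and `resolve`/`neighbors` helpers are all hypothetical.

```python
# Hypothetical mini knowledge graph: canonical ids -> typed entities with properties.
# The ids and attributes below are made up for illustration.
ENTITIES = {
    "/m/0q34g": {"name": "Taj Mahal", "type": "Monument",
                 "props": {"location": "Agra"}},
    "/m/01w5jv": {"name": "Taj Mahal", "type": "Musician",
                  "props": {"genre": "blues"}},
}

# Relational edges: (source id, relation, target id) -- also hypothetical.
EDGES = [("/m/0q34g", "located_in", "/m/0dl4z")]

def resolve(surface: str, context_type: str) -> list[str]:
    """Disambiguate a surface string to canonical entity ids using type context,
    instead of returning every string match."""
    return [eid for eid, e in ENTITIES.items()
            if e["name"] == surface and e["type"] == context_type]

def neighbors(eid: str) -> list[tuple[str, str]]:
    """Traverse outgoing edges from an entity: the graph-walk primitive that
    replaces keyword co-occurrence lookups."""
    return [(rel, dst) for src, rel, dst in EDGES if src == eid]
```

A music-context query then lands on the musician entity, while a travel-context query lands on the monument and can expand along its edges (e.g. to its location) for direct answers.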
BERT (Bidirectional Encoder Representations from Transformers) pre-trains a deep transformer using masked language modeling (MLM) and next sentence prediction (NSP) on unlabeled text, conditioning on left and right context simultaneously in every layer rather than the unidirectional or shallowly bidirectional approaches of predecessor models. Fine-tuned BERT set state-of-the-art results on 11 NLP benchmarks, including a 7.7-point absolute improvement on the GLUE score and a 1.5-point F1 gain on SQuAD v1.1, by learning rich, context-dependent token representations that transfer to downstream tasks with minimal task-specific architecture changes. BERT's deep bidirectionality enables query-document semantic matching that captures polysemous terms, long-range syntactic dependencies, and implicit query intent, improving relevance signals beyond keyword co-occurrence and making it deployable as a reranking layer over candidate retrieval sets.
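The MLM objective described above corrupts the input before training: following the BERT paper's recipe, selected positions are replaced with `[MASK]` 80% of the time, with a random token 10% of the time, and left unchanged 10% of the time, and the model must recover the originals. A minimal sketch of that input corruption (the toy vocabulary, `mlm_mask` helper, and the `-100` ignore-label convention are illustrative assumptions, not the reference implementation):

```python
import random

MASK = "[MASK]"
VOCAB = ["the", "cat", "dog", "sat", "on", "mat"]  # toy vocabulary for illustration

def mlm_mask(tokens, mask_prob=0.15, seed=0):
    """BERT-style MLM corruption sketch: sample positions with mask_prob, then
    apply the 80/10/10 rule. Returns (corrupted tokens, labels), where labels
    hold the original token's vocab index at selected positions and -100
    (a common "ignore in loss" marker) elsewhere."""
    rng = random.Random(seed)
    corrupted, labels = list(tokens), [-100] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            labels[i] = VOCAB.index(tok) if tok in VOCAB else -100
            r = rng.random()
            if r < 0.8:
                corrupted[i] = MASK            # 80%: replace with [MASK]
            elif r < 0.9:
                corrupted[i] = rng.choice(VOCAB)  # 10%: random token
            # else 10%: keep the original token
    return corrupted, labels
```

Keeping some selected tokens unchanged (the final 10%) is what forces the model to build a contextual representation for every position, since it cannot tell which tokens are being predicted.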