RAG: A Breakdown of 7 Retrieval-Augmented Generation Techniques (with Implementation Code)

In today's digital era, natural language processing (NLP) and generative AI are advancing at an unprecedented pace. Among these developments, Retrieval-Augmented Generation (RAG) has emerged as a standout technique. By combining information retrieval with the text-generation capabilities of language models, RAG builds intelligent systems that are more accurate, more current, and more reliable. In this article, we take a deep dive into several advanced RAG techniques and look at how they are shaping the future of conversational AI, intelligent customer service, and semantic search.

I. What Is RAG? Basic Architecture and Principles

RAG is a machine learning architecture built from three core components:

  • Retrieval system: fetches relevant information from a knowledge base.
  • Generation model: produces an answer based on the retrieved information.
  • Fusion mechanism: combines external knowledge with the model's generative capability.

This architecture lets RAG draw on both the generative power of a language model and the richness and accuracy of external data when answering questions, helping it avoid the "hallucination" problem of traditional generative models, i.e., producing content that contradicts the facts.

Here is a basic implementation of RAG:

import numpy as np
from sentence_transformers import SentenceTransformer
import faiss
from transformers import pipeline

class BasicRAG:
    def __init__(self, documents, model_name="all-MiniLM-L6-v2"):
        self.documents = documents
        self.encoder = SentenceTransformer(model_name)
        self.generator = pipeline("text-generation", model="microsoft/DialoGPT-medium")
        self.build_index()

    def build_index(self):
        """Build a FAISS index for semantic search"""
        embeddings = self.encoder.encode(self.documents).astype('float32')
        faiss.normalize_L2(embeddings)  # normalize so inner product behaves like cosine similarity
        self.index = faiss.IndexFlatIP(embeddings.shape[1])
        self.index.add(embeddings)

    def retrieve(self, query, k=3):
        """Retrieve relevant documents"""
        query_embedding = self.encoder.encode([query]).astype('float32')
        faiss.normalize_L2(query_embedding)
        scores, indices = self.index.search(query_embedding, k)
        return [self.documents[i] for i in indices[0]]

    def generate(self, query, context):
        """Generate an answer from the retrieved context"""
        prompt = f"Context: {context}\nQuestion: {query}\nAnswer:"
        response = self.generator(prompt, max_length=200)
        return response[0]['generated_text']
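
To make the flow concrete, here is a minimal usage sketch of BasicRAG; the documents and the query are made-up examples, not from the original article.

# Hypothetical usage sketch for BasicRAG (illustrative documents and query)
docs = [
    "FAISS is a library for efficient similarity search over dense vectors.",
    "Sentence-Transformers produces sentence embeddings for semantic search.",
    "RAG combines retrieval with generation to ground answers in external data."
]

rag = BasicRAG(docs)
question = "How does RAG reduce hallucinations?"
context = " ".join(rag.retrieve(question, k=2))  # top-2 documents as context
print(rag.generate(question, context))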

II. Advanced RAG Techniques and Implementations

1. CRAG (Corrective Retrieval-Augmented Generation)

CRAG is an evolution of RAG that adds a correction mechanism to improve retrieval quality. This matters greatly for enterprise Q&A systems and virtual assistants that demand high accuracy. When handling complex customer inquiries, for example, CRAG can automatically correct inaccurate answers, ensuring the information it provides is both reliable and relevant.

Here is an implementation of CRAG:

import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.metrics.pairwise import cosine_similarity

class CRAG:
    def __init__(self, documents, confidence_threshold=0.7):
        self.documents = documents
        self.threshold = confidence_threshold
        self.tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
        self.model = AutoModel.from_pretrained('bert-base-uncased')
        self.build_embeddings()

    def build_embeddings(self):
        """Build embedding vectors for the documents"""
        self.doc_embeddings = []
        for doc in self.documents:
            inputs = self.tokenizer(doc, return_tensors='pt', truncation=True, padding=True)
            with torch.no_grad():
                outputs = self.model(**inputs)
                embedding = outputs.last_hidden_state.mean(dim=1)
                self.doc_embeddings.append(embedding.numpy())

    def corrective_retrieve(self, query, k=5):
        """Confidence-based corrective retrieval"""
        query_inputs = self.tokenizer(query, return_tensors='pt', truncation=True, padding=True)
        with torch.no_grad():
            query_outputs = self.model(**query_inputs)
            query_embedding = query_outputs.last_hidden_state.mean(dim=1)

        similarities = []
        for doc_emb in self.doc_embeddings:
            sim = cosine_similarity(query_embedding.numpy(), doc_emb)[0][0]
            similarities.append(sim)

        top_indices = np.argsort(similarities)[-k:][::-1]
        corrected_docs = []

        for idx in top_indices:
            if similarities[idx] > self.threshold:
                corrected_docs.append({
                    'document': self.documents[idx],
                    'confidence': similarities[idx]
                })

        if not corrected_docs:
            return self.external_search(query)

        return corrected_docs

    def external_search(self, query):
        """Fall back to an external search when confidence is low"""
        return [{'document': f"External search result for: {query}", 'confidence': 0.5}]
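
A short usage sketch (with made-up documents and an illustrative threshold) shows how the confidence check drives the fallback to external search:

# Hypothetical usage sketch for CRAG
docs = [
    "Aspirin is commonly used to reduce fever and relieve mild pain.",
    "Ibuprofen is a nonsteroidal anti-inflammatory drug."
]

crag = CRAG(docs, confidence_threshold=0.7)
results = crag.corrective_retrieve("What is aspirin used for?", k=2)
for r in results:
    print(f"{r['confidence']:.2f}  {r['document']}")
# If no document clears the threshold, external_search() supplies a fallback result instead.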

2. CAG (Chain-of-Thought Augmented Generation)

CAG combines chain-of-thought reasoning with information retrieval so it can handle complex, multi-step questions. This is essential for AI systems that need to explain themselves and for intelligent tutoring systems. A virtual tutor, for instance, can use CAG to break a complex problem into steps and give students a clear path to the answer.

Here is an implementation of CAG:

class ChainOfThoughtRAG:
    def __init__(self, knowledge_base):
        self.kb = knowledge_base
        self.reasoning_steps = []

    def decompose_query(self, complex_query):
        """Decompose a complex question into sub-questions"""
        decomposition_prompt = f"""
        Decompose the following complex question into smaller steps:
        Question: {complex_query}

        Steps:
        1.
        2.
        3.
        """
        steps = self.llm_decompose(decomposition_prompt)
        return steps

    def chain_retrieve_and_reason(self, query):
        """Run chain-of-thought retrieval and reasoning"""
        steps = self.decompose_query(query)
        reasoning_chain = []

        for i, step in enumerate(steps):
            relevant_docs = self.retrieve_for_step(step)
            step_reasoning = self.reason_step(step, relevant_docs, reasoning_chain)
            reasoning_chain.append({
                'step': i + 1,
                'question': step,
                'evidence': relevant_docs,
                'reasoning': step_reasoning
            })

        final_answer = self.synthesize_chain(reasoning_chain, query)
        return final_answer, reasoning_chain

    def reason_step(self, step, evidence, previous_reasoning):
        """Reason over the current step"""
        context = "\n".join([doc['content'] for doc in evidence])
        previous_context = "\n".join([r['reasoning'] for r in previous_reasoning])

        reasoning_prompt = f"""
        Previous context: {previous_context}
        Evidence: {context}
        Current question: {step}

        Step-by-step reasoning:
        """
        return self.generate_reasoning(reasoning_prompt)

    def synthesize_chain(self, reasoning_chain, original_query):
        """Synthesize the final answer"""
        chain_summary = "\n".join([f"Step {r['step']}: {r['reasoning']}" for r in reasoning_chain])
        synthesis_prompt = f"""
        Original question: {original_query}
        Reasoning chain:
        {chain_summary}

        Integrated final answer:
        """
        return self.generate_final_answer(synthesis_prompt)
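
The class above leaves its LLM-facing helpers (llm_decompose, retrieve_for_step, generate_reasoning, generate_final_answer) unimplemented. As one possible sketch, llm_decompose could wrap any text-generation model and parse the numbered steps out of its output; the model choice and the parsing rule here are assumptions, not part of the original:

import re
from transformers import pipeline

# Sketch only: substitute whichever instruction-following model you actually use
_decomposer = pipeline("text-generation", model="gpt2")

def llm_decompose(self, decomposition_prompt):
    """Ask the model to list sub-questions, then keep lines that start with '1.', '2.', ..."""
    raw = _decomposer(decomposition_prompt, max_new_tokens=128)[0]["generated_text"]
    steps = re.findall(r"^\s*\d+\.\s*(.+)$", raw, flags=re.MULTILINE)
    return [s.strip() for s in steps if s.strip()]

ChainOfThoughtRAG.llm_decompose = llm_decompose  # attach the sketch to the class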

3. Graph RAG: Intelligent Navigation over Knowledge Graphs

Graph RAG uses a knowledge graph to capture complex relationships between entities, enabling richer contextual retrieval and advanced semantic navigation. In drug discovery, for example, Graph RAG can help researchers quickly identify potential drug targets by intelligently navigating molecular networks.

Here is an implementation of Graph RAG:

import networkx as nx
from neo4j import GraphDatabase
import torch
from torch_geometric.nn import GCNConv

class GraphRAG:
    def __init__(self, neo4j_uri, username, password):
        self.driver = GraphDatabase.driver(neo4j_uri, auth=(username, password))
        self.graph = nx.Graph()
        self.entity_embeddings = {}
        self.build_graph()

    def build_graph(self):
        """Build the knowledge graph from Neo4j"""
        with self.driver.session() as session:
            nodes_query = "MATCH (n) RETURN n.id as id, n.type as type, n.properties as props"
            edges_query = "MATCH (a)-[r]->(b) RETURN a.id as source, b.id as target, type(r) as relation"

            nodes = session.run(nodes_query)
            edges = session.run(edges_query)

            for node in nodes:
                self.graph.add_node(node['id'], type=node['type'], **node['props'])

            for edge in edges:
                self.graph.add_edge(edge['source'], edge['target'], relation=edge['relation'])

    def graph_walk_retrieve(self, query_entities, max_hops=3):
        """Retrieval based on graph traversal"""
        relevant_subgraph = nx.Graph()
        visited = set()
        queue = [(entity, 0) for entity in query_entities]

        while queue:
            current_entity, depth = queue.pop(0)

            if depth > max_hops or current_entity in visited:
                continue

            visited.add(current_entity)

            if current_entity in self.graph:
                relevant_subgraph.add_node(current_entity, **self.graph.nodes[current_entity])

                for neighbor in self.graph.neighbors(current_entity):
                    edge_data = self.graph.edges[current_entity, neighbor]

                    if self.is_relevant_relation(edge_data['relation']):
                        relevant_subgraph.add_edge(current_entity, neighbor, **edge_data)
                        queue.append((neighbor, depth + 1))

        return relevant_subgraph

    def gnn_enhanced_retrieval(self, subgraph, query_embedding):
        """Use a GNN to enhance retrieval"""
        node_features = self.extract_node_features(subgraph)
        edge_index = self.get_edge_index(subgraph)

        # Note: this GCN layer is randomly initialized here; in practice its weights would be trained
        gcn = GCNConv(node_features.size(1), 128)
        enhanced_embeddings = gcn(node_features, edge_index)

        similarities = torch.cosine_similarity(query_embedding.unsqueeze(0), enhanced_embeddings)
        top_nodes = torch.topk(similarities, k=10)
        return top_nodes

    def multi_hop_reasoning(self, start_entities, target_concept):
        """Multi-hop reasoning along graph paths"""
        paths = []

        for start in start_entities:
            try:
                path = nx.shortest_path(self.graph, start, target_concept)
                path_knowledge = self.extract_path_knowledge(path)
                paths.append(path_knowledge)
            except nx.NetworkXNoPath:
                continue

        return self.synthesize_multi_hop_answer(paths)
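
gnn_enhanced_retrieval relies on extract_node_features and get_edge_index, which are not shown. A minimal sketch is below, assuming placeholder degree-based node features (a real system would use learned entity embeddings) and the [2, num_edges] edge index format expected by torch_geometric:

def extract_node_features(self, subgraph):
    """Placeholder features: one float per node (its degree), shaped [num_nodes, 1] for GCNConv."""
    nodes = list(subgraph.nodes())
    return torch.tensor([[float(subgraph.degree(n))] for n in nodes], dtype=torch.float)

def get_edge_index(self, subgraph):
    """Convert networkx edges to a [2, num_edges] long tensor, adding both directions."""
    nodes = list(subgraph.nodes())
    idx = {n: i for i, n in enumerate(nodes)}
    edges = []
    for u, v in subgraph.edges():
        edges.append([idx[u], idx[v]])
        edges.append([idx[v], idx[u]])
    if not edges:
        return torch.empty((2, 0), dtype=torch.long)
    return torch.tensor(edges, dtype=torch.long).t().contiguous()

GraphRAG.extract_node_features = extract_node_features
GraphRAG.get_edge_index = get_edge_index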

4. Agentic RAG: Autonomous Decision-Making and Dynamic Adaptation

Agentic RAG introduces autonomous agents that decide dynamically, based on context, when and how to retrieve information. In financial market analysis, for example, Agentic RAG can analyze data in real time and adjust its strategy as the market changes.

Here is an implementation of Agentic RAG:

from enum import Enum
import asyncio
from typing import List, Dict, Any

class AgentAction(Enum):
    RETRIEVE = "retrieve"
    GENERATE = "generate"
    VERIFY = "verify"
    SEARCH_EXTERNAL = "search_external"
    DECOMPOSE = "decompose"

class RAGAgent:
    def __init__(self, tools, memory_size=1000):
        self.tools = tools
        self.memory = []
        self.memory_size = memory_size
        self.state = "idle"
        self.confidence_threshold = 0.8

    async def plan_and_execute(self, user_query):
        """Plan and execute autonomously"""
        query_complexity = self.analyze_query_complexity(user_query)
        action_plan = self.create_action_plan(user_query, query_complexity)
        results = []

        for action in action_plan:
            result = await self.execute_action(action, user_query, results)
            results.append(result)

            if self.should_replan(result):
                new_plan = self.replan(user_query, results)
                action_plan.extend(new_plan)

        final_answer = self.synthesize_results(results, user_query)
        self.update_memory(user_query, final_answer, results)
        return final_answer

    def create_action_plan(self, query, complexity):
        """Create an action plan based on query complexity"""
        if complexity == "simple":
            return [{"action": AgentAction.RETRIEVE, "params": {"k": 3}},
                    {"action": AgentAction.GENERATE, "params": {"style": "direct"}}]
        elif complexity == "complex":
            return [{"action": AgentAction.DECOMPOSE, "params": {}},
                    {"action": AgentAction.RETRIEVE, "params": {"k": 5}},
                    {"action": AgentAction.VERIFY, "params": {"threshold": 0.7}},
                    {"action": AgentAction.GENERATE, "params": {"style": "detailed"}}]
        else:  # very_complex
            return [{"action": AgentAction.DECOMPOSE, "params": {}},
                    {"action": AgentAction.RETRIEVE, "params": {"k": 10}},
                    {"action": AgentAction.SEARCH_EXTERNAL, "params": {}},
                    {"action": AgentAction.VERIFY, "params": {"threshold": 0.9}},
                    {"action": AgentAction.GENERATE, "params": {"style": "comprehensive"}}]

    async def execute_action(self, action_config, query, previous_results):
        """Execute a specific action"""
        action = action_config["action"]
        params = action_config["params"]

        if action == AgentAction.RETRIEVE:
            return await self.tools["retriever"].retrieve(query, **params)
        elif action == AgentAction.GENERATE:
            context = self.build_context(previous_results)
            return await self.tools["generator"].generate(query, context, **params)
        elif action == AgentAction.VERIFY:
            return await self.verify_information(previous_results, **params)
        elif action == AgentAction.SEARCH_EXTERNAL:
            return await self.tools["web_search"].search(query, **params)
        elif action == AgentAction.DECOMPOSE:
            return await self.decompose_complex_query(query)

    def should_replan(self, result):
        """Decide whether to replan based on the result"""
        if hasattr(result, 'confidence') and result.confidence < self.confidence_threshold:
            return True
        if hasattr(result, 'error') and result.error:
            return True
        return False

    def adaptive_learning(self, feedback):
        """Adapt to feedback over time"""
        if feedback['success']:
            self.confidence_threshold *= 0.95  # lower the bar: be more confident
        else:
            self.confidence_threshold *= 1.05  # raise the bar: be more cautious
        self.update_strategy_memory(feedback)
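
plan_and_execute depends on analyze_query_complexity, which is not defined above. A toy heuristic like the following (the cue words and thresholds are arbitrary choices, not from the original) is enough to exercise the three planning branches:

def analyze_query_complexity(self, user_query):
    """Toy heuristic: classify by length and by multi-part cue words (illustrative thresholds)."""
    words = user_query.lower().split()
    cue_words = {"and", "compare", "why", "how", "impact", "versus"}
    cue_hits = sum(1 for w in words if w in cue_words)
    if len(words) <= 8 and cue_hits == 0:
        return "simple"
    if len(words) <= 20 and cue_hits <= 2:
        return "complex"
    return "very_complex"

RAGAgent.analyze_query_complexity = analyze_query_complexity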

5. Adaptive RAG: Personalization and Continuous Learning

Adaptive RAG implements continuous learning and dynamic personalization, adjusting to user preferences and the specific context. On a personalized learning platform, for example, Adaptive RAG can tailor content to each student's learning style and progress.

Here is an implementation of Adaptive RAG:

import numpy as np
from sklearn.cluster import KMeans
from collections import defaultdict
import pickle

class AdaptiveRAG:
    def __init__(self, base_retriever, user_profile_path=None):
        self.base_retriever = base_retriever
        self.user_profiles = defaultdict(dict)
        self.adaptation_history = []
        self.context_clusters = None
        self.load_user_profiles(user_profile_path)

    def adapt_to_user(self, user_id, query, feedback_history):
        """Adapt the system to a user's preferences"""
        if user_id not in self.user_profiles:
            self.user_profiles[user_id] = self.create_user_profile()

        profile = self.user_profiles[user_id]
        self.update_preferences(profile, feedback_history)
        adapted_strategy = self.adapt_retrieval_strategy(profile, query)
        return adapted_strategy

    def create_user_profile(self):
        """Create an initial user preference profile"""
        return {
            'domain_preferences': {},
            'complexity_preference': 'medium',
            'response_style': 'balanced',
            'topic_interests': [],
            'feedback_patterns': [],
            'success_metrics': {
                'accuracy': 0.5,
                'relevance': 0.5,
                'completeness': 0.5
            }
        }

    def update_preferences(self, profile, feedback_history):
        """Update user preferences from feedback"""
        for feedback in feedback_history[-10:]:
            for metric, value in feedback['metrics'].items():
                current = profile['success_metrics'][metric]
                profile['success_metrics'][metric] = 0.9 * current + 0.1 * value

            domain = feedback.get('domain')
            if domain:
                if domain not in profile['domain_preferences']:
                    profile['domain_preferences'][domain] = 0.5

                if feedback['rating'] > 3:
                    profile['domain_preferences'][domain] *= 1.1
                else:
                    profile['domain_preferences'][domain] *= 0.9

    def adapt_retrieval_strategy(self, profile, query):
        """Adapt the retrieval strategy to user preferences"""
        query_domain = self.detect_domain(query)
        k = self.calculate_adaptive_k(profile, query_domain)
        similarity_threshold = self.calculate_threshold(profile)
        preferred_sources = self.select_sources(profile, query_domain)
        return {
            'k': k,
            'threshold': similarity_threshold,
            'sources': preferred_sources,
            'reranking_weights': self.get_reranking_weights(profile)
        }

    def contextual_adaptation(self, context_type, session_history):
        """Dynamic adaptation based on session context"""
        if self.context_clusters is None:
            self.build_context_clusters()

        current_cluster = self.identify_context_cluster(context_type)
        similar_sessions = self.find_similar_sessions(current_cluster)
        adaptation_params = self.learn_from_similar_sessions(similar_sessions)
        return adaptation_params

    def meta_learning_update(self, task_performance):
        """Meta-learning: refine adaptation strategies based on task performance"""
        performance_patterns = self.analyze_performance_patterns(task_performance)

        for pattern in performance_patterns:
            if pattern['success_rate'] > 0.8:
                self.promote_strategy(pattern['strategy'])
            elif pattern['success_rate'] < 0.4:
                self.demote_strategy(pattern['strategy'])

        self.save_adaptation_knowledge()

    def real_time_adaptation(self, query, initial_results, user_interaction):
        """Real-time dynamic adaptation"""
        interaction_signals = self.extract_interaction_signals(user_interaction)

        if interaction_signals['needs_more_detail']:
            enhanced_results = self.enhance_detail(initial_results)
            return enhanced_results
        elif interaction_signals['needs_simplification']:
            simplified_results = self.simplify_results(initial_results)
            return simplified_results
        elif interaction_signals['needs_different_perspective']:
            alternative_results = self.find_alternative_perspective(query)
            return alternative_results

        return initial_results
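
adapt_retrieval_strategy calls calculate_adaptive_k, which is left undefined. One plausible rule, sketched below with arbitrary bounds, is to retrieve more documents when the user's completeness scores are low or the query falls in a domain they favor:

def calculate_adaptive_k(self, profile, query_domain):
    """Illustrative rule for choosing how many documents to retrieve."""
    k = 5
    if profile['success_metrics'].get('completeness', 0.5) < 0.5:
        k += 3  # the user tends to want fuller answers
    if profile['domain_preferences'].get(query_domain, 0.5) > 0.7:
        k += 2  # favored domain: cast a wider net
    return min(k, 15)

AdaptiveRAG.calculate_adaptive_k = calculate_adaptive_k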

6. Multi Modal RAG: Integrating Information Across Modalities

Multi Modal RAG processes and integrates information across text, images, audio, and video, providing comprehensive cross-modal retrieval. In medical diagnosis, for example, it can analyze a patient's clinical notes, medical images, and physiological signals together, giving doctors more complete diagnostic support.

Here is an implementation of Multi Modal RAG:

import numpy as np
import torch
import torchvision.transforms as transforms
from transformers import CLIPModel, CLIPProcessor
import librosa
import cv2
from PIL import Image

class MultiModalRAG:
    def __init__(self):
        self.clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
        self.clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
        self.text_index = None
        self.image_index = None
        self.audio_index = None
        self.video_index = None
        self.multimodal_embeddings = {}

    def process_multimodal_document(self, document):
        """Process a multimodal document"""
        processed_doc = {
            'id': document['id'],
            'embeddings': {},
            'content': {}
        }

        if 'text' in document:
            text_embedding = self.encode_text(document['text'])
            processed_doc['embeddings']['text'] = text_embedding
            processed_doc['content']['text'] = document['text']

        if 'images' in document:
            image_embeddings = []
            for img_path in document['images']:
                img_embedding = self.encode_image(img_path)
                image_embeddings.append(img_embedding)
            processed_doc['embeddings']['images'] = image_embeddings
            processed_doc['content']['images'] = document['images']

        if 'audio' in document:
            audio_embedding = self.encode_audio(document['audio'])
            processed_doc['embeddings']['audio'] = audio_embedding
            processed_doc['content']['audio'] = document['audio']

        if 'video' in document:
            video_embedding = self.encode_video(document['video'])
            processed_doc['embeddings']['video'] = video_embedding
            processed_doc['content']['video'] = document['video']

        return processed_doc

    def encode_text(self, text):
        """Encode text with CLIP"""
        inputs = self.clip_processor(text=text, return_tensors="pt")
        with torch.no_grad():
            text_features = self.clip_model.get_text_features(**inputs)
        return text_features.numpy()

    def encode_image(self, image):
        """Encode an image (file path or PIL Image) with CLIP"""
        if isinstance(image, str):
            image = Image.open(image)
        inputs = self.clip_processor(images=image, return_tensors="pt")
        with torch.no_grad():
            image_features = self.clip_model.get_image_features(**inputs)
        return image_features.numpy()

    def encode_audio(self, audio_path):
        """Extract audio features with librosa"""
        y, sr = librosa.load(audio_path)
        mfccs = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        spectral_centroids = librosa.feature.spectral_centroid(y=y, sr=sr)
        chroma = librosa.feature.chroma_stft(y=y, sr=sr)
        audio_features = np.concatenate([np.mean(mfccs, axis=1), np.mean(spectral_centroids, axis=1), np.mean(chroma, axis=1)])
        return audio_features

    def encode_video(self, video_path):
        """Sample key frames from the video and encode them"""
        cap = cv2.VideoCapture(video_path)
        frame_embeddings = []
        frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        interval = max(1, frame_count // 10)

        for i in range(0, frame_count, interval):
            cap.set(cv2.CAP_PROP_POS_FRAMES, i)
            ret, frame = cap.read()

            if ret:
                frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                pil_image = Image.fromarray(frame_rgb)
                frame_embedding = self.encode_image(pil_image)
                frame_embeddings.append(frame_embedding)

        cap.release()

        if frame_embeddings:
            return np.mean(frame_embeddings, axis=0)
        else:
            return np.zeros(512)

    def cross_modal_retrieve(self, query, modalities=['text', 'image'], k=5):
        """Cross-modal retrieval"""
        query_embeddings = {}

        if 'text' in modalities and isinstance(query, str):
            query_embeddings['text'] = self.encode_text(query)

        if 'image' in modalities and hasattr(query, 'image_path'):
            query_embeddings['image'] = self.encode_image(query.image_path)

        all_results = []

        for modality, query_emb in query_embeddings.items():
            modal_results = self.search_modality(modality, query_emb, k)
            all_results.extend(modal_results)

        fused_results = self.multimodal_fusion(all_results, query_embeddings)
        return fused_results[:k]

    def multimodal_fusion(self, results, query_embeddings):
        """Fuse multimodal results"""
        scored_results = []

        for result in results:
            total_score = 0
            modality_count = 0

            for modality, query_emb in query_embeddings.items():
                if modality in result['embeddings']:
                    similarity = self.calculate_similarity(query_emb, result['embeddings'][modality])
                    total_score += similarity
                    modality_count += 1

            if modality_count > 0:
                avg_score = total_score / modality_count
                multimodal_bonus = 1 + (modality_count - 1) * 0.1
                final_score = avg_score * multimodal_bonus

                scored_results.append({
                    'document': result,
                    'score': final_score
                })

        scored_results.sort(key=lambda x: x['score'], reverse=True)
        return [r['document'] for r in scored_results]

    def generate_multimodal_response(self, query, retrieved_docs):
        """Generate a multimodal response"""
        response = {
            'text': '',
            'images': [],
            'audio': None,
            'video': None
        }

        text_content = []
        for doc in retrieved_docs:
            if 'text' in doc['content']:
                text_content.append(doc['content']['text'])

        if text_content:
            response['text'] = self.generate_text_response(query, text_content)

        relevant_images = []
        for doc in retrieved_docs:
            if 'images' in doc['content']:
                relevant_images.extend(doc['content']['images'])

        response['images'] = relevant_images[:3]

        return response
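
multimodal_fusion calls calculate_similarity, which is not defined above. A plain cosine similarity over the stored numpy embeddings would serve; this sketch assumes it lives in the same module, where numpy is already imported as np:

def calculate_similarity(self, query_emb, doc_emb):
    """Cosine similarity between two embedding arrays, flattened to 1-D."""
    q = np.asarray(query_emb, dtype=float).ravel()
    d = np.asarray(doc_emb, dtype=float).ravel()
    denom = np.linalg.norm(q) * np.linalg.norm(d)
    return float(np.dot(q, d) / denom) if denom else 0.0

MultiModalRAG.calculate_similarity = calculate_similarity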

7. W-RAG (Web-Enhanced RAG): Real-Time Integration with the Web

W-RAG combines real-time web search with local retrieval, delivering up-to-date information and broader knowledge coverage. In news reporting, for example, W-RAG can pull in the latest developments as they happen and feed them into an intelligent writing system.

Here is an implementation of W-RAG:

from bs4 import BeautifulSoup
import asyncio
import aiohttp
import requests
from datetime import datetime, timedelta

class WebEnhancedRAG:
    def __init__(self, local_retriever, web_apis):
        self.local_retriever = local_retriever
        self.web_apis = web_apis
        self.cache = {}
        self.cache_ttl = timedelta(hours=1)
        self.web_sources = {
            'news': ['reuters.com', 'bbc.com', 'cnn.com'],
            'academic': ['arxiv.org', 'scholar.google.com'],
            'technical': ['stackoverflow.com', 'github.com'],
            'general': ['wikipedia.org']
        }

    async def hybrid_retrieve(self, query, local_weight=0.6, web_weight=0.4):
        """Hybrid retrieval: local + web"""
        local_results = await self.local_retriever.retrieve(query)
        web_needed = self.assess_web_necessity(query, local_results)

        if web_needed:
            web_results = await self.web_search_async(query)
            combined_results = self.combine_local_web(local_results, web_results, local_weight, web_weight)
        else:
            combined_results = local_results

        return combined_results

    def assess_web_necessity(self, query, local_results):
        """Assess whether a web search is needed"""
        temporal_indicators = ['latest', 'recent', 'current', 'today', 'news']
        has_temporal = any(indicator in query.lower() for indicator in temporal_indicators)
        local_confidence = self.calculate_local_confidence(local_results)
        dynamic_topics = ['stock', 'weather', 'news', 'covid', 'election']
        is_dynamic = any(topic in query.lower() for topic in dynamic_topics)

        return has_temporal or local_confidence < 0.7 or is_dynamic

    async def web_search_async(self, query):
        """Asynchronous web search"""
        cache_key = f"web_{hash(query)}"
        if cache_key in self.cache:
            cached_result, timestamp = self.cache[cache_key]
            if datetime.now() - timestamp < self.cache_ttl:
                return cached_result

        search_type = self.classify_query_type(query)
        tasks = []

        if 'google' in self.web_apis:
            tasks.append(self.google_search(query))
        if 'bing' in self.web_apis:
            tasks.append(self.bing_search(query))

        if search_type == 'news':
            tasks.append(self.news_search(query))
        elif search_type == 'academic':
            tasks.append(self.academic_search(query))
        elif search_type == 'technical':
            tasks.append(self.technical_search(query))

        results = await asyncio.gather(*tasks, return_exceptions=True)
        processed_results = self.process_web_results(results, query)
        self.cache[cache_key] = (processed_results, datetime.now())
        return processed_results

    async def google_search(self, query):
        """Search with the Google Custom Search API"""
        api_key = self.web_apis['google']['api_key']
        cx = self.web_apis['google']['cx']
        url = "https://www.googleapis.com/customsearch/v1"
        params = {
            'key': api_key,
            'cx': cx,
            'q': query,
            'num': 10
        }

        async with aiohttp.ClientSession() as session:
            async with session.get(url, params=params) as response:
                data = await response.json()
                results = []
                for item in data.get('items', []):
                    results.append({
                        'title': item['title'],
                        'url': item['link'],
                        'snippet': item['snippet'],
                        'source': 'google'
                    })
                return results

    async def news_search(self, query):
        """Search news sources"""
        news_results = []
        if 'newsapi' in self.web_apis:
            api_key = self.web_apis['newsapi']['api_key']
            url = "https://newsapi.org/v2/everything"
            params = {
                'apiKey': api_key,
                'q': query,
                'sortBy': 'publishedAt',
                'pageSize': 10
            }

            async with aiohttp.ClientSession() as session:
                async with session.get(url, params=params) as response:
                    data = await response.json()
                    for article in data.get('articles', []):
                        news_results.append({
                            'title': article['title'],
                            'url': article['url'],
                            'snippet': article['description'],
                            'published': article['publishedAt'],
                            'source': 'news'
                        })
        return news_results

    def real_time_fact_checking(self, claim, sources):
        """Real-time fact checking"""
        fact_check_results = []

        for source in sources:
            source_info = self.extract_source_info(source)
            credibility_score = self.assess_source_credibility(source_info)
            consistency_score = self.check_claim_consistency(claim, source['content'])
            fact_check_results.append({
                'source': source,
                'credibility': credibility_score,
                'consistency': consistency_score,
                'verdict': self.determine_verdict(credibility_score, consistency_score)
            })

        return self.aggregate_fact_check(fact_check_results)

    def temporal_aware_retrieval(self, query, time_sensitivity='medium'):
        """Time-aware retrieval"""
        time_windows = {
            'high': timedelta(hours=1),
            'medium': timedelta(days=1),
            'low': timedelta(weeks=1)
        }
        cutoff_time = datetime.now() - time_windows[time_sensitivity]
        recent_results = [result for result in self.all_results if result.get('timestamp', datetime.min) > cutoff_time]

        if len(recent_results) < 3:
            cutoff_time = datetime.now() - time_windows['low']
            recent_results = [r for r in self.all_results if r.get('timestamp', datetime.min) > cutoff_time]

        return recent_results

    def web_content_extraction(self, url):
        """Extract the main content of a web page"""
        try:
            response = requests.get(url, timeout=10)
            soup = BeautifulSoup(response.content, 'html.parser')
            for element in soup(['script', 'style', 'nav', 'footer', 'aside']):
                element.decompose()
            main_content = self.extract_main_content(soup)
            metadata = self.extract_metadata(soup)
            return {
                'content': main_content,
                'metadata': metadata,
                'url': url,
                'extracted_at': datetime.now()
            }
        except Exception as e:
            return {'error': str(e), 'url': url}
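
hybrid_retrieve calls combine_local_web, which the class never defines. One simple option is a weighted score merge; the sketch below assumes each result is a dict with an optional 'score' field (defaulting to 0.5), which is an assumption about the result format, not part of the original:

def combine_local_web(self, local_results, web_results, local_weight, web_weight):
    """Illustrative fusion: weight each side's relevance score, then sort the merged list."""
    merged = []
    for r in local_results:
        merged.append({**r, 'fused_score': local_weight * r.get('score', 0.5), 'origin': 'local'})
    for r in web_results:
        merged.append({**r, 'fused_score': web_weight * r.get('score', 0.5), 'origin': 'web'})
    merged.sort(key=lambda r: r['fused_score'], reverse=True)
    return merged

WebEnhancedRAG.combine_local_web = combine_local_web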

III. The Future of RAG

The future of RAG is promising, with development heading in several main directions:

  • Advanced multimodal integration: seamless handling of every modality, so AI can understand text, images, audio, and video together.
  • Causal reasoning: understanding cause and effect rather than mere correlation, making answers more logical and persuasive.
  • Deep personalization: adapting to individual needs in real time and delivering a fully customized experience for every user.
  • Computational efficiency: algorithmic optimization and hardware acceleration to support large-scale deployment and bring RAG to far more scenarios.
  • Full explainability: a completely transparent decision process, so users can see exactly how the AI reached its conclusions.

IV. Real-World Applications of RAG

Advanced RAG techniques are already in wide use across many domains. Typical scenarios include:

  • Intelligent customer service: automatically correcting inaccurate answers to improve customer satisfaction.
  • Virtual medical assistants: verifying critical information to support clinical diagnosis.
  • Personalized learning platforms: delivering teaching content tailored to each student's learning style.
  • Intelligent writing systems: retrieving the latest information in real time to support news reporting and content creation.

V. Conclusion

Advanced RAG techniques open up new possibilities for intelligent systems. From corrective retrieval to multimodal integration, and from causal reasoning to deep personalization, these techniques not only raise system performance but also deliver smarter, more attentive service to users. As the technology keeps advancing, RAG will play an ever larger role across more and more domains.

Reprinted from Halo咯咯. Author: 基咯咯

© Copyright belongs to the author. Please credit the source when reprinting; unauthorized use may incur legal liability.