目前大型企业已将 AI 用于文件审核,包括批记录和分析报告,以及偏差、变更辅助和其他 GMP 流程支持。
这部分我今年有个项目就是文件审核,故此学习本文。
目前业内主要通过多层措施确保结果可信,这些措施包括消除、替代、技术控制、程序控制和行为控制,也称之为纵深防御,即在整个系统的数据流中采用多层保护。通过在不同点实施顺序控制,冗余的保护层可以相互弥补彼此的局限性,使假阴性(遗漏质量问题)数量相等或减少,并减少假阳性(或不必要的调查)。
七层方法:
第一层:输入护栏Input Guardrails(数据验证,相似性分析,限制环境等)
第二层:通过检索增强生成RAG获取领域知识(RAG 实现应与适用数据源集成,如MES LIMS QDT集成?)
第三层:LLM(大型语言模型)选择和能力
第四层:LLM(大型语言模型) 微调(不同的预期用途呈现不同的风险特征,提示词调整等)
第五层:输出护栏Output Guardrails(降低低置信数值输出,确保关键信息一致性,多LLM间隔离,增加独立“LLM 作为裁判”)
第六层:系统监测(技术、流程、行为三级管理,置信度低于可接受水平时,自动将案例导向人工操作员,人工干预等)
第七层:可解释性和透明度(一切透明,思维链确认,探针探测异常,LLM作为裁判)
Large language models (LLMs) can enable decision support in pharmaceutical manufacturing but can also create risks through overreliance.1, 2 LLM-enabled computerized systems have the potential to increase assessment consistency and performance in a scalable way for automated decision-making workflows.
大型语言模型(LLMs)能够支持药品制造中的决策,但也可能因过度依赖而带来风险。LLM 驱动的计算机化系统有潜力以可扩展的方式提高评估的一致性和性能,用于自动化决策工作流程。
This article proposes a GAMP-aligned framework with layered risk control strategies to realize these benefits while balancing the associated risks such as overreliance, hallucinations, and limited explainability.
本文提出一个符合 GAMP 要求的框架,采用分层风险控制策略,以实现这些效益,同时平衡相关风险,如过度依赖、幻觉和可解释性有限。
Since the emergence of LLMs like ChatGPT, the pharmaceutical industry has explored their use for decision support such as brainstorming, writing assistance, or summarization.1 These early decision support applications rely on human competence, responsibility, and artificial intelligence (AI) literacy to ensure safe use within regulated environments despite risks of overreliance.2, 3 Recent advances in foundation models, especially multimodal LLMs, now demonstrate the ability to process data in various forms. These include text, images, and audio inputs simultaneously. Coupled with longer task horizons and improved reasoning abilities, this expanded capability shows promise for automating GMP workflows with diverse data types and assessments previously not possible with traditional deterministic systems (identical input produces identical output).4, 5
自 ChatGPT 等 LLMs 出现以来,制药行业已探索将其用于决策支持,如头脑风暴、写作辅助或摘要。这些早期的决策支持应用依赖于人类能力、职责和人工智能(AI)素养,以确保在受监管环境中安全使用,尽管存在过度依赖的风险。基础模型的最新进展,特别是多模态 LLMs(西门注:多模态 LLMs(Multimodal Large Language Models,多模态大语言模型)是指把传统只“读文字”的大语言模型升级,使其能同时理解、推理并生成文本、图像、音频、视频等多种数据模态的一类大模型。核心思想是:以 LLM 为“中枢大脑”,通过额外的编码器或连接器把不同模态信息映射到同一语义空间,实现真正的跨模态理解与生成。GPT-4V、Gemini 1.5 Pro、Claude 3(Sonnet/Opus)、Qwen-VL、DeepSeek-VL 等均已商用,支持图文对话、视频摘要、文档 OCR 问答),现在已展示出处理多种形式数据的能力,包括同时处理文本、图像和音频输入。结合更长的任务视野和改进的推理能力,这种扩展的能力显示出自动化具有多种数据类型和评估的 GMP 工作流程的潜力,而这些在传统确定性系统(相同输入产生相同输出)中是不可能实现的。
These advances place the pharmaceutical industry at a transition point from decision support toward automated decision-making in GMP environments. To ensure compliant and robust systems, such LLM-enabled computerized systems require risk controls tailored to the nondeterministic nature of LLMs where identical inputs may yield different outputs and factual hallucinations are possible. Building on the ISPE GAMP® Guide: Artificial Intelligence, this article proposes a layered defense-in-depth framework for mitigating those risks.6 It defines seven complementary control layers spanning input and output guardrails, domain knowledge, LLM selection, monitoring and explainability, and transparency. Considerations on each based on intended use and risks can provide a path to compliant automation while maintaining or improving product quality, patient safety, and data integrity within GMP or as inspiration for other GxP areas.7, 8
这些进步使制药行业处于从决策支持向 GMP 环境中自动化决策的转折点。为确保合规和稳健的系统,此类支持 LLM 的计算机化系统需要针对 LLM 的非确定性本质的风险控制,其中相同的输入可能产生不同的输出,并且可能存在事实性幻觉。基于 ISPE GAMP ® 指南:人工智能,本文提出了一个分层纵深防御框架来降低这些风险。它定义了七个互补的控制层,涵盖输入和输出护栏、领域知识、LLM 选择、监控和可解释性以及透明度。根据预期用途和风险对每一层进行考虑,可以提供一条合规自动化的路径,同时保持或提高 GMP 内产品质量、患者安全和数据完整性,或作为其他 GxP 领域的灵感来源。
Regulatory authorities are actively defining expectations to address AI in GMP environments, with particular focus on trustworthiness.9, 10 Some regulatory authorities advocate risk-based approaches for AI use-case implementation, whereas others are considering prohibiting LLMs in GMP decision-making due to their nondeterministic nature.7, 9 The US Food and Drug Administration has specifically identified hallucinations as a challenge.11 This necessitates full human oversight for LLM subsystems without controls (see Figure 1).
监管机构正积极制定期望来应对 GMP 环境中的 AI,特别关注其可信度。 一些监管机构提倡基于风险的方法来实施 AI 用例,而另一些机构则考虑因 LLMs 的非确定性特性而在 GMP 决策中禁止其使用。美国食品药品监督管理局已特别将幻觉识别为一种挑战。这要求对无控制机制的 LLM 子系统进行全面的人类监督(见图 1)。
AI-enabled computerized systems are already used for automated GMP decision-making use cases but depend primarily on traditional deterministic machine learning or deep learning neural networks. This is seen within computer vision applications with known outcome classifications and structured data sources like automatic visual inspection (AVI).12
已有 AI 赋能的计算机化系统用于自动化 GMP 决策用例,但主要依赖传统的确定性机器学习或深度学习神经网络。这见于具有已知结果分类和结构化数据源的计算机视觉应用,如自动灯检(AVI)。
The ISPE GAMP® Guide on AI helps organizations identify and assess AI subsystem risk by assessing maturity through the lens of adaptiveness (static/dynamic models) and autonomy (decision support and making).6 This framework focuses on AI maturity level 3–5 risks related to static systems with high autonomy. To this end, the five risk control strategies from the ISPE GAMP® AI Guide are considered throughout the layers of the framework. These include elimination, substitution, technical controls, procedural controls, and behavioral controls.6 Additionally, this framework is supported by the concept of defense in depth from information security and LLM development.13 Defense in depth employs multiple protective layers throughout the system’s data flow. By implementing sequential controls at different points, the redundant protective layers compensate for each other’s limitations (see Figure 2).
ISPE GAMP ® 《人工智能指南》帮助组织通过评估适应性(静态/动态模型)和自主性(决策支持和决策)的成熟度来识别和评估人工智能子系统风险。该框架重点关注与具有高自主性的静态系统相关的 3-5 级人工智能成熟度风险。为此,ISPE GAMP ® 《人工智能指南》中的五种风险控制策略在整个框架的各层级中都被考虑。这些包括消除、替代、技术控制、程序控制和行为控制。此外,该框架还得到了信息安全领域和 LLM 开发中纵深防御理念的支持。纵深防御在整个系统的数据流中采用多层保护。通过在不同点实施顺序控制,冗余的保护层可以相互弥补彼此的局限性(见图 2)。
This proposed layered framework consists of seven control layers. These can work together as a complementary risk mitigation strategy depending on the use case and identified risks. The amount and type of controls in each layer should be risk-based, and not all seven layers are required for every use case. The framework layers follow the workflow from input to output using domain knowledge, the selected LLM, and potential fine-tuning, and subsequent monitoring and explainability.
该提出的分层框架由七个控制层组成。这些层可以根据具体应用场景和已识别的风险协同工作,作为一种补充性的风险缓解策略。每一层的控制量和类型应基于风险评估,并非所有七个层都适用于每个应用场景。该框架层按照从输入到输出的工作流程,利用领域知识、选定的 LLM 以及潜在的微调,并随后进行监控和可解释性分析。
Operational Benefits 操作优势
Even though it’s becoming a regulatory expectation, the final implemented LLM-enabled system should equal or reduce false negatives (missed quality issues) compared to previous processes.7 The additional potential benefit is reducing false positives (or unnecessary investigations) that consume quality resources and attention depending on the use case. The implemented system should also aim to equal or reduce assessment variability compared to both LLM systems without controls and traditional human assessments. Other benefits include scalable implementation with centralized control.
尽管这已成为监管预期,但最终实施的 LLM 系统应与以往流程相比,使假阴性(遗漏质量问题)数量相等或减少。 另一个潜在的好处是减少假阳性(或不必要的调查),这些调查会消耗质量资源和注意力,具体取决于使用场景。实施的系统还应旨在与无控制的 LLM 系统和传统人工评估相比,使评估变异性相等或减少。其他好处包括可扩展的实施和集中控制。
Organizations can deploy standardized LLM subsystems across either multiple processes or manufacturing sites and maintain centralized governance over control layers and quality thresholds. However, each new process or site application must assess if it’s within the intended use, including considerations on model language capabilities. This centralized approach promotes scalability and enforces consistent quality standards across geographically distributed operations and may allow for site-specific customizations where appropriate within intended use.
组织可以在多个流程或制造场所部署标准化的 LLM 子系统,并对控制层和品质阈值进行集中管理。然而,每个新的流程或场所应用都必须评估其是否在预期用途范围内,包括考虑模型语言能力。这种集中式方法促进了可扩展性,并在地理分布的运营中强制执行一致的品质标准,并在预期用途范围内允许适当的场所特定定制。
The framework focuses on a single workflow, but larger tasks could be broken into smaller subtasks with more nuanced controls for each subtask workflow, often increasing the performance.14 This could also potentially decrease risk by having more suitable controls for each subtask. The structured monitoring (in layer six) also generates quantifiable performance data that enables systematic improvement of both the LLM-enabled subsystems and the underlying business processes. Finally, these controls transform potentially opaque “black box” systems into more transparent and trustworthy decision-making tools by implementing control techniques commensurate with system risk, regardless of whether the system is used for decision support or decision-making.
该框架专注于单一工作流,但较大的任务可以分解为较小的子任务,并为每个子任务工作流施加更细致的控制,这通常能提高性能。这也有可能通过为每个子任务提供更合适的控制来降低风险。结构化监控(在第六层)还生成可量化的性能数据,使 LLM 子系统及底层业务流程得到系统化改进。最后,通过实施与系统风险相匹配的控制技术,无论系统用于决策支持还是决策制定,这些控制都能将潜在的“黑箱”系统转化为更透明、更值得信赖的决策工具。
With the increase in capabilities and a layered risk mitigation, automated decision-making use cases within GMP can include:15
随着能力的提升和分层风险缓解,GMP 中的自动化决策用例可以包括:
As an example, complaint handling tasks could range from smaller subtasks like complaint categorization coding to larger and more complex tasks like complaint investigations. These listed areas feature workflows with defined decision pathways and established risk understanding that make automation feasible.
例如,投诉处理任务可能包括从较小的子任务(如投诉分类编码)到较大且更复杂的任务(如投诉调查)。这些列出的领域具有定义明确的决策路径和已建立的风险认知,这使得自动化成为可能。
For each layer, this article proposes suggestions for practical implementations using existing, proven techniques from inside and outside GxP. These are kept at a high level to allow for future developments, especially as explainability remains an evolving field, and without consideration of costs, compute usage, or duration per workflow, because these are constantly evolving and use-case dependent. A workflow illustration with all the layers applied is shown in Figure 3.
针对每一层,本文提出了使用 GxP 内外现有成熟技术进行实际实施的建议。这些建议保持在高层次上,以便于未来的发展,特别是由于可解释性仍然是一个不断发展的领域;同时不考虑成本、计算使用和每个工作流的持续时间,因为它们在不断演变且依赖于具体用例。所有层应用后的工作流示意图如图 3 所示。
Before the input reaches the model, preprocessing controls could verify input appropriateness. This layer resembles controls of incoming goods ensuring manufacturing materials meet specifications prior to advancing to the next workflow steps. Guardrails is a generic term for detective controls but is used here as a layer to describe enforceable constraints on LLM behavior by analysis of either the input or output.
在输入到达模型之前,预处理控制可以验证输入的适当性。这一层类似于来料控制,确保物料在进入下一个工作流程步骤之前符合规格。Guardrails(护栏)是检测性控制的通用术语,但在此用作一个层,指通过分析输入或输出对 LLM 行为施加的可执行约束。
Input guardrails could cover:
输入护栏可能涵盖:
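As a minimal illustration of this layer, the sketch below applies a few assumed preprocessing checks before anything reaches the model. The specific thresholds, allowed document types, and the crude prompt-injection pattern are invented for illustration, not taken from the article; a real system would derive them from a risk assessment.

```python
import re
from dataclasses import dataclass, field

@dataclass
class GuardrailResult:
    accepted: bool
    reasons: list = field(default_factory=list)

# Hypothetical limits; real values would come from the risk assessment.
MAX_CHARS = 20_000
ALLOWED_DOC_TYPES = {"batch_record", "deviation", "complaint"}

def check_input(text: str, doc_type: str) -> GuardrailResult:
    """Reject or flag inputs before they ever reach the LLM."""
    reasons = []
    if doc_type not in ALLOWED_DOC_TYPES:
        reasons.append(f"unsupported document type: {doc_type}")
    if len(text) > MAX_CHARS:
        reasons.append("input exceeds configured length limit")
    if not text.strip():
        reasons.append("empty input")
    # Crude heuristic for instructions aimed at the model itself.
    if re.search(r"ignore (all|previous) instructions", text, re.IGNORECASE):
        reasons.append("possible prompt-injection pattern")
    return GuardrailResult(accepted=not reasons, reasons=reasons)
```

A rejected input would be routed back to the submitter or to a human operator rather than silently dropped, mirroring incoming-goods rejection in manufacturing.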
Some use cases may require a more deterministic approach where either the LLM is configured specifically for deterministic responses or with techniques that identify whether an input has been previously processed and, if so, returns the same previous output from an external data source.17, 18 This ensures that identical inputs consistently produce identical outputs but adds complexity during development and deployment of new system versions as previous outputs or configurations could be based on different model training, context, or other variations.
某些用例可能需要更确定性的方法:或者将 LLM 专门配置为确定性响应,或者使用技术识别某个输入是否已被处理过,如果是,则从外部数据源返回与先前相同的输出。这确保了相同的输入始终产生相同的输出,但在开发和部署新系统版本时会增加复杂性,因为先前的输出或配置可能基于不同的模型训练、上下文或其他变化。
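The "return the same previous output" technique above can be sketched as a hash-keyed cache. The version tag is an assumption added here to address the article's caveat that cached outputs may be based on different model training: keying on a system version means a new deployment simply starts a fresh cache rather than replaying stale outputs.

```python
import hashlib
import json

class DeterministicCache:
    """Return the stored output for inputs already assessed, so identical
    inputs always yield identical outputs within one system version."""

    def __init__(self, system_version: str):
        self.system_version = system_version
        self._store = {}  # in production: a validated external data source

    def _key(self, payload: dict) -> str:
        # Canonical JSON so key ordering does not change the hash.
        blob = json.dumps(payload, sort_keys=True, ensure_ascii=False)
        return hashlib.sha256(f"{self.system_version}:{blob}".encode()).hexdigest()

    def get(self, payload: dict):
        """Previous output for this exact input, or None if unseen."""
        return self._store.get(self._key(payload))

    def put(self, payload: dict, output: str) -> None:
        self._store[self._key(payload)] = output
```

On a cache hit the LLM is bypassed entirely; on a miss the model runs once and its output is stored for all future identical inputs.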
Domain knowledge can be introduced to the LLM-enabled system by different techniques, going from light interventions like prompt engineering and providing examples to retrieval-augmented generation (RAG). The current primary tool for larger corpora of domain knowledge is RAG. RAG retrieves relevant information from external knowledge bases based on the input query, then provides this retrieved context to the LLM during output generation.6 This provides control by anchoring responses in verified information. RAG can reduce hallucination risks through context control, though it does not eliminate them entirely.
领域知识可以通过不同的技术引入到 LLM(大型语言模型)系统中,从轻微干预如提示工程和提供示例到检索增强生成(RAG)。当前用于较大领域知识库的主要工具是使用 RAG。RAG 根据输入查询从外部知识库中检索相关信息,然后在输出生成过程中将检索到的上下文提供给 LLM。这通过将响应锚定在已验证的信息中提供了控制。RAG 可以通过上下文控制来降低幻觉风险,尽管它并不能完全消除这些风险。
The RAG implementation should integrate with fit-for-use data sources and could, for example, contain data sources like:
RAG 实现应与适用数据源集成,例如,可以包含以下数据源:
RAG effectiveness is supported by input guardrails that direct the query to the correct context of use and ensure appropriate knowledge bases are used. The database itself should naturally adhere to common data integrity principles including periodic review, version control, and formal change management procedures.8
RAG 的有效性通过输入护栏来支持,以引导到正确的使用上下文,并确保使用适当的知识库。数据库本身应自然遵守常见的数据完整性原则,包括定期审查、版本控制和正式变更管理程序。
Numerous LLM suppliers offer models with varying transparency, performance, and guardrail capabilities. These can include:
众多 LLM 供应商提供具有不同透明度、性能和护栏能力的模型。这些可能包括:
Organizations should thoroughly document these capabilities and limitations when assessing and justifying an LLM supplier and model for fitness of use. Regulatory authorities increasingly expect documentation regarding cloud providers. This potentially also translates to LLM providers, despite the challenges in acquiring detailed information from proprietary model suppliers.8 This transparency gap represents an evolving compliance consideration for manufacturers.
在评估和论证 LLM 供应商和模型适用性时,组织应全面记录这些能力和局限性。监管机构越来越多地要求提供关于云服务提供商的文档。这可能会扩展到 LLM 提供商,尽管从专有模型供应商那里获取详细信息存在挑战。这种透明度差距代表了制造商不断发展的合规考虑因素。
The evaluation process should follow a structured approach in line with general supplier good practices, as further described in the ISPE GAMP® Guide: Artificial Intelligence.6 When choosing a suitable model, organizations can consider the guide's risk control strategies of substitution and elimination: substitution can favor a simpler, more explainable model, while elimination avoids models with potential security exploits.
评估过程应遵循结构化方法,与一般供应商良好实践一致,详见 ISPE GAMP® 指南:人工智能。在选择合适的模型时,组织可以考虑该指南中的替代和消除风险控制策略:替代可以选用更简单、可解释性更强的模型,而消除则可避免存在潜在安全漏洞的模型。
Model fine-tuning represents a control layer after LLM supplier and model selection that can increase the probability of outputs consistently meeting predetermined specifications or forms.14, 22 This enables customization of the model for the specific use case and is available for some open source and proprietary models.
模型微调是 LLM 供应商和模型选择之后的控制层,可以增加输出结果始终符合预定规格或形式的概率。这使得模型能够针对特定应用场景进行定制,并且适用于一些开源和专有模型。
Fine-tuning enables case-specific predetermined specifications and allows output structures to be defined via schemas/coding, such as complaint categorization coding, or certain language use, such as deviation conclusions. For some intended uses, fine-tuning may be necessary and can include:
微调能够实现针对特定案例的预定规格,并通过模式/编码(如投诉分类编码或特定语言使用,如偏差结论)来定义输出结构。对于某些预期用途,微调可能是必要的,可以包括:
The development of use-case-specific “gold standard” datasets requires careful consideration and subject matter expertise from different disciplines.7, 23 Different intended uses present varying risk profiles that directly impact model decision-making through decision boundaries and acceptable error rates. Although fine-tuning increases consistency, the data used for fine-tuning should be controlled. This may increase complexity of model life cycle processes.6
开发针对特定用例的“黄金标准”数据集需要跨学科的专业知识和仔细考虑。不同的预期用途呈现不同的风险特征,这些特征通过决策边界和可接受的错误率直接影响模型决策。尽管微调可以提高一致性,但用于微调的数据应受控。这可能增加模型生命周期过程的复杂性。
Fine-tuning approaches vary by deployment model. This is because open source models can be fine-tuned on-premises with greater control of data. However, some cloud-hosted proprietary models offer fine-tuning capabilities via API where training data is processed by the supplier. The approach selected should be documented as part of the system’s risk assessment, with appropriate controls justified.
微调方法因部署模型而异。这是因为开源模型可以在本地进行微调,从而更好地控制数据。然而,一些云托管的专有模型通过 API 提供微调功能,其中训练数据由供应商处理。所选方法应作为系统风险评估的一部分进行记录,并说明适当的控制措施。
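As one concrete illustration of the section above, supervised fine-tuning data for a task like complaint categorization coding is commonly prepared as one reviewed example per line in a chat-style JSONL file. The field names below follow one widely used convention, and the complaint text and category codes are hypothetical; the essential control point is that the assistant-side labels come from the reviewed "gold standard" dataset, kept under version control, not from any model.

```python
import json

def make_example(complaint_text: str, gold_category: str) -> str:
    """One supervised fine-tuning record in a chat-style JSONL format.
    gold_category must come from the reviewed gold-standard dataset."""
    record = {
        "messages": [
            {"role": "system",
             "content": "Categorize the complaint using the approved code list."},
            {"role": "user", "content": complaint_text},
            {"role": "assistant", "content": gold_category},
        ]
    }
    return json.dumps(record, ensure_ascii=False)

def write_dataset(examples, path) -> None:
    """Write (text, category) pairs as JSONL: one reviewed example per line."""
    with open(path, "w", encoding="utf-8") as f:
        for text, category in examples:
            f.write(make_example(text, category) + "\n")
```

Keeping the dataset as line-oriented JSONL makes diffs reviewable in change management, which supports the controlled-data requirement noted above.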
Multiple output control methods can support output credibility akin to the multiple analytical methods used in quality control. Output guardrails include:
多种输出控制方法可以支持输出可信度,类似于质量控制中使用的多种分析方法。输出护栏包括:
The LLM-as-a-judge approach is becoming common outside the pharmaceutical industry.26 However, two LLMs can have overlapping vulnerabilities and should only be considered a supplement in GMP processes.
LLM 作为裁判的方法在制药行业以外正变得普遍。然而,两个 LLM 可能存在重叠的漏洞,并且只应被视为 GMP 流程的补充。
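A minimal sketch of the "LLM as a judge" supplement described above. The judge is modeled as any callable from prompt to text (stubbed in the test), and, consistent with the caveat about overlapping vulnerabilities, a FAIL verdict routes the case to a human reviewer rather than acting as the sole control. The rubric wording and verdict format are assumptions for illustration.

```python
def judge_output(primary_output: str, judge_llm, rubric: str) -> dict:
    """Ask an independent 'judge' model to grade the primary model's
    output against a rubric; failures go to a human, not auto-rejection."""
    prompt = (
        f"Rubric: {rubric}\n"
        f"Candidate output:\n{primary_output}\n"
        "Reply PASS or FAIL with a one-line reason."
    )
    verdict = judge_llm(prompt)
    passed = verdict.strip().upper().startswith("PASS")
    return {
        "passed": passed,
        "verdict": verdict,          # logged for the audit trail
        "route_to_human": not passed # behavioral control, per layer six
    }
```

Because the judge only supplements other guardrails, its verdict is recorded alongside the primary output so reviewers can later evaluate the judge itself.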
Operational LLM subsystems should be monitored using appropriate techniques and metrics, as is expected with traditional machine learning (ML) subsystems.7, 14 At implementation, the system operates with a configuration and performance fit for intended use. As production continues, inputs reflect real-world data and could evolve beyond the initial validation dataset. This is similar to how manufacturing processes expect ongoing monitoring to maintain their validated state. This change can impact model performance through data drift (changes in input distribution).
操作中的 LLM 子系统应使用适当的技术和指标进行监控,正如传统机器学习(ML)子系统所期望的那样。在实施时,系统以适合预期用途的配置和性能运行。随着生产的继续,输入反映了真实世界的数据,并可能超出初始验证数据集。这类似于制造过程预期持续监控以保持其验证状态的方式。这种变化可能通过数据漂移(输入分布的变化)影响模型性能。
The system monitoring layer can provide technical, procedural, and behavioral controls to avoid drift by ensuring performance metrics are tracked against the validated baseline performance7, 14 and by verifying that human assessment or intervention is applied as appropriate to the use case. This verification can be accomplished by:
系统监控层可以提供技术、程序和行为控制以避免漂移:确保性能指标相对于经验证的基线性能进行跟踪,并验证根据具体用例实施适当的人工评估或干预。这种验证可以通过以下方式完成:
This monitoring approach complements the static controls implemented during system development.
这种监测方法补充了系统开发期间实施的静态控制。
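Two monitoring behaviors from this layer — routing low-confidence cases to a human operator, and tracking agreement with periodic human spot-checks against the validated baseline — might be sketched as follows. The thresholds and window size are illustrative assumptions, not values from the article.

```python
from collections import deque
from statistics import mean

class PerformanceMonitor:
    """Track rolling agreement with human spot-checks against a validated
    baseline, and route low-confidence cases to a human operator."""

    def __init__(self, baseline_accuracy: float, confidence_floor: float,
                 window: int = 50):
        self.baseline = baseline_accuracy
        self.floor = confidence_floor
        self.results = deque(maxlen=window)  # rolling 1/0 agreement scores

    def route(self, confidence: float) -> str:
        # Behavioral control: below the floor, a human decides.
        return "human_review" if confidence < self.floor else "automated"

    def record_spot_check(self, model_answer: str, human_answer: str) -> None:
        self.results.append(1.0 if model_answer == human_answer else 0.0)

    def drift_alarm(self) -> bool:
        # Alarm when rolling agreement falls below the validated baseline,
        # a possible symptom of data drift in production inputs.
        return bool(self.results) and mean(self.results) < self.baseline
```

An alarm would trigger a procedural response (investigation, possible revalidation) rather than any automatic model change, keeping the system static as assumed by the framework.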
System and model explainability (XAI) contribute to addressing the challenge of interpreting model output as AI-enabled system complexity increases.6 Explainability focuses on providing meaningful explanations with explanation accuracy and knowledge limit for specific LLM outputs. Transparency refers to visibility into the system’s overall architecture and operations like model input, output, and decisions. Both qualities are preferable when decisions affect product quality or patient safety. The audit trail enhances LLM subsystem transparency by logging results from each workflow step and control layer, including data and function requests, to other systems. For some models and use cases, explainability can be supported with emerging methods like:
系统与模型可解释性(XAI)有助于应对随着 AI 赋能系统复杂度增加而解释模型输出的挑战。可解释性专注于为特定 LLM 输出提供有意义的解释,兼顾解释准确性和知识边界。透明度是指对系统整体架构和操作(如模型输入、输出和决策)的可见性。当决策影响产品质量或患者安全时,这两种特性都是可取的。审计追踪通过记录每个工作流步骤和控制层的结果(包括对其他系统的数据和功能请求),增强了 LLM 子系统的透明度。对于某些模型和用例,可解释性可以通过新兴方法获得支持,例如:
Quality performance remains the primary objective, with explainability and traceability approaches providing supporting evidence on how conclusions were reached. These methods can be necessary for some use cases as complexity increases. Explainability features and audit trails enable later reviews of existing controls to verify adequacy and support continual improvement. The approach aligns with GMP requirements and data integrity principles.
质量表现仍然是主要目标,可解释性和可追溯性方法为结论是如何得出的提供了支持性证据。随着复杂性的增加,这些方法对于某些用例可能是必要的。可解释性功能和审计追踪能够对现有控制进行后期审查,以验证其充分性并支持持续改进。该方法符合 GMP 要求和数据完整性原则。
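The audit-trail transparency described above — logging the result of each workflow step and control layer — can be as simple as an append-only JSONL record. This is a sketch: a validated implementation would add user identity, tamper evidence, and synchronized time per data integrity expectations, none of which are shown here.

```python
import json
import time

class AuditTrail:
    """Append-only log of each workflow step and control-layer result,
    supporting later review of how a conclusion was reached."""

    def __init__(self):
        self.entries = []

    def log(self, step: str, detail: dict) -> None:
        self.entries.append({
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "step": step,
            "detail": detail,
        })

    def export(self) -> str:
        # One JSON object per line, suitable for periodic review.
        return "\n".join(json.dumps(e, ensure_ascii=False) for e in self.entries)
```

Logging every layer's verdict (input guardrail, retrieval sources, judge result, routing decision) is what turns the layered controls into reviewable evidence rather than transient internal state.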
The pharmaceutical industry stands at a transition point where LLMs can extend beyond decision support toward automated decision-making, leveraging improved capabilities in new use cases. Their nondeterministic nature introduces risks such as overreliance, hallucinations, and limited explainability. However, these can be assessed and mitigated through a layered, GAMP-aligned risk control approach spanning guardrails, LLM selection, fine-tuning, system domain knowledge, monitoring, and explainability. This opens opportunities for pharmaceutical manufacturers to develop LLM-enabled computerized systems for automated decision-making, where appropriate risk controls ensure such systems maintain or improve upon current processes to advance product quality, patient safety, and data integrity.
制药行业正站在一个转折点上,LLM 可以超越决策支持,迈向自动化决策,利用其在新用例中的增强能力。它们的非确定性本质引入了过度依赖、幻觉和可解释性有限等风险。然而,这些风险可以通过与 GAMP 一致的分层风险控制方法(如护栏、LLM 选择、微调、系统领域知识、监控和可解释性)进行评估和缓解。这为制药制造商开发支持自动化决策的 LLM 计算机化系统提供了机会,其中适当的风险控制确保此类系统能够维持或改进现有流程,以提升产品质量、患者安全和数据完整性。