Scripting and tool development with Python in Maya is an indispensable part of the CG production pipeline. However, when working with complex scenes, large amounts of geometry, or computationally intensive tasks, performance often becomes the bottleneck. A slow script not only hurts artists' productivity, it can delay the entire production schedule.
This guide takes a deep look at Maya Python performance optimization, from API selection to concrete techniques, to help you write faster, more efficient Maya tools.
Why performance optimization matters
- Productivity: responsive tools let artists focus on creating instead of waiting for scripts to finish
- Complex scenes: as scene complexity grows, unoptimized code can go from seconds to minutes or even hours
- User experience: fast tools get used; slow tools get abandoned
- Scalability: well-optimized code adapts more easily to larger future projects
Basic principles of performance optimization
Before diving into specific techniques, a few ground rules:
- Measure first: profile before you optimize, and find the real bottleneck
- The 80/20 rule: typically 80% of the time is spent in 20% of the code, so optimize those hot spots first
- Readability trade-off: don't sacrifice readability and maintainability for tiny gains
- "Good enough": if a script finishes in 0.5 seconds, it is probably not worth the effort to get it to 0.3
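As a concrete take on "measure first", here is a minimal pure-Python timing helper (no Maya dependency; `time_call` is a name chosen here for illustration). Taking the best of several runs reduces noise from background work, which matters when comparing two candidate implementations:

```python
import time

def time_call(func, *args, repeat=3, **kwargs):
    """Run func several times and return (best_seconds, result).

    The best-of-N time is a more stable basis for comparison than a
    single run, since any one run can be slowed by unrelated work.
    """
    best = float("inf")
    result = None
    for _ in range(repeat):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        best = min(best, time.perf_counter() - start)
    return best, result
```

Usage: `time_call(get_vertex_positions_fast, "pSphere1")` would return the best runtime and the result, letting you compare candidates before committing to an optimization.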
1. Choosing a Maya Python API: cmds vs OpenMaya vs API 2.0
Maya offers three main Python interfaces, each with a different trade-off between ease of use and performance.
1.1 Performance ranking of the APIs
From slowest to fastest:
PyMEL (slowest) → maya.cmds (moderate) → maya.OpenMaya (API 1.0, fast) → maya.api.OpenMaya (API 2.0, fastest)
1.2 maya.cmds
Pros:
- Gentle learning curve, close to MEL syntax
- Well documented, with abundant community resources
- Concise, readable code
- Handles the undo queue automatically
Cons:
- Relatively slow
- Every call involves string parsing and object lookup
- The overhead adds up quickly across many calls
When to use it:
- UI interaction and simple operations
- Operations that are not called frequently
- Rapid prototyping
- Scripts whose runtime is already "good enough"
Example:
import maya.cmds as cmds

# Simple, but slow inside a loop
for i in range(1000):
    pos = cmds.xform(f"obj_{i}", query=True, worldSpace=True, translation=True)
1.3 maya.OpenMaya (API 1.0)
Pros:
- Much faster than cmds
- Direct access to Maya's internal data structures
- Built for bulk operations
Cons:
- Steep learning curve
- Requires MScriptUtil for certain data types
- Fairly verbose code
- No automatic undo handling (requires MPxCommand)
When to use it:
- Maintaining legacy code
- When you are already fluent in API 1.0
Note: API 1.0 has largely been superseded by API 2.0; new projects should start with API 2.0.
1.4 maya.api.OpenMaya (API 2.0) - recommended
Pros:
- Best performance
- Pythonic API, easier to use than API 1.0
- No MScriptUtil needed
- Works with Python lists and tuples
- Stricter type checking, fewer runtime errors
Cons:
- Moderate learning curve
- Usually more code than cmds
- No automatic undo handling
When to use it:
- Performance-critical operations
- Heavy geometry processing
- Bulk data access
- Core tools in a production environment
Example comparison:
# ========== Using maya.cmds (slow) ==========
import maya.cmds as cmds
import time

start = time.time()
positions = []
for i in range(5000):
    pos = cmds.xform(f"pSphere1.vtx[{i}]", query=True, worldSpace=True, translation=True)
    positions.append(pos)
print(f"cmds took: {time.time() - start:.2f}s")

# ========== Using maya.api.OpenMaya (fast) ==========
import maya.api.OpenMaya as om
import time

start = time.time()
# Get the mesh
sel = om.MSelectionList()
sel.add("pSphere1")
dag_path = sel.getDagPath(0)
mesh_fn = om.MFnMesh(dag_path)
# Fetch all vertex positions in one call
positions = mesh_fn.getPoints(om.MSpace.kWorld)
print(f"API 2.0 took: {time.time() - start:.2f}s")
Performance difference: in this example, API 2.0 is typically 50-100x faster than cmds.
1.5 A mixed-API strategy
In practice, the best approach is to mix APIs depending on the task:
import maya.cmds as cmds
import maya.api.OpenMaya as om
import maya.api.OpenMayaAnim as oma

def optimize_skincluster_weights(mesh_name, skin_cluster):
    """
    Example of a mixed strategy for optimizing skin weights.
    """
    # Use cmds for simple queries and UI work
    if not cmds.objExists(mesh_name):
        cmds.warning(f"{mesh_name} does not exist")
        return
    # Use API 2.0 for the heavy data access
    sel = om.MSelectionList()
    sel.add(skin_cluster)
    sel.add(mesh_name)
    skin_obj = sel.getDependNode(0)
    mesh_dag = sel.getDagPath(1)
    skin_fn = oma.MFnSkinCluster(skin_obj)  # MFnSkinCluster lives in OpenMayaAnim
    # Fetch the weights in bulk
    mesh_fn = om.MFnMesh(mesh_dag)
    comp_fn = om.MFnSingleIndexedComponent()
    vert_comp = comp_fn.create(om.MFn.kMeshVertComponent)
    comp_fn.setCompleteData(mesh_fn.numVertices)
    weights, num_influences = skin_fn.getWeights(mesh_dag, vert_comp)
    # ... process the weight data ...
    # Use cmds for UI feedback
    cmds.confirmDialog(title="Done", message=f"Processed {mesh_fn.numVertices} vertices")
2. Batch operations and context managers
2.1 Suspending viewport refresh
Viewport refresh is one of the most expensive operations in Maya. Suspending it during batch operations yields a significant speedup.
import maya.cmds as cmds

# ========== Not recommended: refresh the viewport on every operation ==========
def create_cubes_slow(count):
    for i in range(count):
        cmds.polyCube(name=f"cube_{i}")
        cmds.move(i * 2, 0, 0)

# ========== Recommended: suspend viewport refresh ==========
def create_cubes_fast(count):
    cmds.refresh(suspend=True)
    try:
        for i in range(count):
            cmds.polyCube(name=f"cube_{i}")
            cmds.move(i * 2, 0, 0)
    finally:
        cmds.refresh(suspend=False)
        cmds.refresh()  # Force one final refresh

# Context-manager version (more elegant)
from contextlib import contextmanager

@contextmanager
def suspend_refresh():
    """Context manager that suspends viewport refresh."""
    cmds.refresh(suspend=True)
    try:
        yield
    finally:
        cmds.refresh(suspend=False)
        cmds.refresh()

# Usage
with suspend_refresh():
    for i in range(1000):
        cmds.polyCube(name=f"cube_{i}")
Performance gain: for creating 1000 cubes, this can be 10-50x faster.
2.2 Managing the undo queue
Maintaining the undo queue also consumes significant resources. For batch operations that don't need undo, disable it temporarily.
import maya.cmds as cmds
from contextlib import contextmanager

# ========== Not recommended: record undo for every operation ==========
def batch_operation_slow():
    for i in range(10000):
        cmds.setAttr(f"obj_{i}.translateX", i * 0.1)

# ========== Recommended: group the batch into a single undo chunk ==========
def batch_operation_chunk():
    cmds.undoInfo(openChunk=True)
    try:
        for i in range(10000):
            cmds.setAttr(f"obj_{i}.translateX", i * 0.1)
    finally:
        cmds.undoInfo(closeChunk=True)

# ========== More aggressive: suspend undo entirely ==========
def batch_operation_no_undo():
    cmds.undoInfo(stateWithoutFlush=False)
    try:
        for i in range(10000):
            cmds.setAttr(f"obj_{i}.translateX", i * 0.1)
    finally:
        cmds.undoInfo(stateWithoutFlush=True)

# Context-manager version
@contextmanager
def suspend_undo():
    """Context manager that suspends the undo queue."""
    cmds.undoInfo(stateWithoutFlush=False)
    try:
        yield
    finally:
        cmds.undoInfo(stateWithoutFlush=True)

# Context manager for a single undo chunk
@contextmanager
def undo_chunk(chunk_name="operation"):
    """Merge a group of operations into a single undo chunk."""
    cmds.undoInfo(openChunk=True, chunkName=chunk_name)
    try:
        yield
    finally:
        cmds.undoInfo(closeChunk=True)

# Usage
with suspend_undo():
    # These operations are not recorded in the undo queue
    for i in range(10000):
        cmds.setAttr(f"obj_{i}.translateX", i * 0.1)
2.3 A combined batch-processing context manager
Several optimization techniques can be combined into one context manager:
from contextlib import contextmanager
import maya.cmds as cmds

@contextmanager
def batch_context(suspend_refresh=True, suspend_undo=True, restore_selection=True):
    """
    Combined context manager for batch operations.

    Args:
        suspend_refresh: whether to suspend viewport refresh
        suspend_undo: whether to suspend the undo queue
        restore_selection: whether to restore the pre-operation selection
    """
    # Save the current state
    prev_selection = cmds.ls(selection=True) if restore_selection else []
    # Suspend refresh
    if suspend_refresh:
        cmds.refresh(suspend=True)
    # Suspend undo
    if suspend_undo:
        cmds.undoInfo(stateWithoutFlush=False)
    try:
        yield
    finally:
        # Restore undo
        if suspend_undo:
            cmds.undoInfo(stateWithoutFlush=True)
        # Restore refresh
        if suspend_refresh:
            cmds.refresh(suspend=False)
            cmds.refresh()
        # Restore the selection
        if restore_selection and prev_selection:
            cmds.select(prev_selection, replace=True)

# Usage example
with batch_context():
    for i in range(5000):
        cmds.polyCube(name=f"cube_{i}")
        cmds.move(i * 2, 0, 0)
3. Iterators and bulk data access
3.1 Avoid per-element access
When working with geometry, accessing vertices, faces, or edges one at a time is a performance killer.
import maya.api.OpenMaya as om
import maya.cmds as cmds

# ========== Slow: per-vertex access ==========
def get_vertex_positions_slow(mesh_name):
    positions = []
    vertex_count = cmds.polyEvaluate(mesh_name, vertex=True)
    for i in range(vertex_count):
        pos = cmds.xform(f"{mesh_name}.vtx[{i}]", query=True, worldSpace=True, translation=True)
        positions.append(pos)
    return positions

# ========== Fast: bulk access ==========
def get_vertex_positions_fast(mesh_name):
    sel = om.MSelectionList()
    sel.add(mesh_name)
    dag_path = sel.getDagPath(0)
    mesh_fn = om.MFnMesh(dag_path)
    # Fetch all vertex positions in one call
    positions = mesh_fn.getPoints(om.MSpace.kWorld)
    return positions
3.2 Using iterators
When you genuinely need to walk components one by one, API iterators are far faster than cmds.
import maya.api.OpenMaya as om

def iterate_vertices_api(mesh_name):
    """Iterate over vertices with an API 2.0 iterator."""
    sel = om.MSelectionList()
    sel.add(mesh_name)
    dag_path = sel.getDagPath(0)
    # Create the vertex iterator
    vert_iter = om.MItMeshVertex(dag_path)
    results = []
    while not vert_iter.isDone():
        # Read the current vertex
        vertex_id = vert_iter.index()
        position = vert_iter.position(om.MSpace.kWorld)
        normal = vert_iter.getNormal()
        results.append({
            'id': vertex_id,
            'position': position,
            'normal': normal
        })
        vert_iter.next()
    return results

def iterate_edges_api(mesh_name):
    """Iterate over edges with an API 2.0 iterator."""
    sel = om.MSelectionList()
    sel.add(mesh_name)
    dag_path = sel.getDagPath(0)
    edge_iter = om.MItMeshEdge(dag_path)
    edge_lengths = []
    while not edge_iter.isDone():
        length = edge_iter.length()
        edge_lengths.append(length)
        edge_iter.next()
    return edge_lengths
3.3 Setting attributes in bulk
import maya.api.OpenMaya as om
import maya.cmds as cmds

# ========== Slow: set positions vertex by vertex ==========
def set_positions_slow(mesh_name, new_positions):
    for i, pos in enumerate(new_positions):
        cmds.xform(f"{mesh_name}.vtx[{i}]", worldSpace=True, translation=pos)

# ========== Fast: set positions in bulk ==========
def set_positions_fast(mesh_name, new_positions):
    sel = om.MSelectionList()
    sel.add(mesh_name)
    dag_path = sel.getDagPath(0)
    mesh_fn = om.MFnMesh(dag_path)
    # Convert the Python list to an MPointArray
    point_array = om.MPointArray(new_positions)
    # Set all vertex positions in one call
    mesh_fn.setPoints(point_array, om.MSpace.kWorld)
4. Optimizing skin weight operations
Skin weight operations are a common performance bottleneck. Here is a complete optimization example.
4.1 Getting skin weights
import maya.cmds as cmds
import maya.api.OpenMaya as om
import maya.api.OpenMayaAnim as oma

# ========== Slow: per-vertex queries with cmds.skinPercent ==========
def get_skin_weights_slow(mesh_name, skin_cluster):
    """
    For a 10,000-vertex model this can take 30-60 seconds.
    """
    joints = cmds.skinCluster(skin_cluster, query=True, influence=True)
    vertex_count = cmds.polyEvaluate(mesh_name, vertex=True)
    all_weights = []
    for i in range(vertex_count):
        vtx = f"{mesh_name}.vtx[{i}]"
        weights = {}
        for jnt in joints:
            weight = cmds.skinPercent(skin_cluster, vtx, query=True, transform=jnt)
            weights[jnt] = weight
        all_weights.append(weights)
    return all_weights

# ========== Fast: bulk query with API 2.0 ==========
def get_skin_weights_fast(mesh_name, skin_cluster):
    """
    For a 10,000-vertex model this usually takes 0.1-0.5 seconds.
    Speedup: 100-300x.
    """
    sel = om.MSelectionList()
    sel.add(skin_cluster)
    sel.add(mesh_name)
    skin_obj = sel.getDependNode(0)
    mesh_dag = sel.getDagPath(1)
    skin_fn = oma.MFnSkinCluster(skin_obj)
    # Build a component containing every vertex
    mesh_fn = om.MFnMesh(mesh_dag)
    vert_count = mesh_fn.numVertices
    comp_fn = om.MFnSingleIndexedComponent()
    vert_comp = comp_fn.create(om.MFn.kMeshVertComponent)
    comp_fn.setCompleteData(vert_count)
    # Fetch the weights in bulk
    weights, num_influences = skin_fn.getWeights(mesh_dag, vert_comp)
    # Convert the flat array into a 2D structure
    weights_2d = []
    for i in range(vert_count):
        start_idx = i * num_influences
        end_idx = start_idx + num_influences
        vert_weights = list(weights[start_idx:end_idx])
        weights_2d.append(vert_weights)
    # Get the influence names (influenceObjects returns an MDagPathArray)
    influence_objects = skin_fn.influenceObjects()
    influence_names = [path.partialPathName() for path in influence_objects]
    return weights_2d, influence_names
4.2 Setting skin weights
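Both getting and setting weights hinge on the same layout: the flat array stores vertex i's weight for influence j at index i * num_influences + j. Before the full examples, that reshaping can be sketched in plain Python (the helper names flat_to_2d and two_d_to_flat are chosen here for illustration):

```python
def flat_to_2d(flat_weights, num_influences):
    """Split a flat weight array (as returned by getWeights) into
    per-vertex rows of num_influences values each."""
    return [list(flat_weights[i:i + num_influences])
            for i in range(0, len(flat_weights), num_influences)]

def two_d_to_flat(weights_2d):
    """Flatten per-vertex rows back into the layout setWeights expects."""
    flat = []
    for row in weights_2d:
        flat.extend(row)
    return flat
```

The round trip is lossless, which makes these helpers safe to drop between a bulk getWeights and a bulk setWeights call.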
# ========== Slow: per-vertex writes with cmds.skinPercent ==========
def set_skin_weights_slow(mesh_name, skin_cluster, weights_data):
    for i, weights in enumerate(weights_data):
        vtx = f"{mesh_name}.vtx[{i}]"
        transform_values = [(jnt, weight) for jnt, weight in weights.items()]
        cmds.skinPercent(skin_cluster, vtx, transformValue=transform_values)

# ========== Fast: bulk writes with API 2.0 ==========
def set_skin_weights_fast(mesh_name, skin_cluster, weights_2d):
    """
    weights_2d: 2D weight array [[w1, w2, w3...], [w1, w2, w3...], ...]
    """
    sel = om.MSelectionList()
    sel.add(skin_cluster)
    sel.add(mesh_name)
    skin_obj = sel.getDependNode(0)
    mesh_dag = sel.getDagPath(1)
    skin_fn = oma.MFnSkinCluster(skin_obj)
    # Build the component
    mesh_fn = om.MFnMesh(mesh_dag)
    comp_fn = om.MFnSingleIndexedComponent()
    vert_comp = comp_fn.create(om.MFn.kMeshVertComponent)
    comp_fn.setCompleteData(mesh_fn.numVertices)
    # Flatten the 2D array
    flat_weights = []
    for vert_weights in weights_2d:
        flat_weights.extend(vert_weights)
    # Convert to MDoubleArray
    weight_array = om.MDoubleArray(flat_weights)
    # setWeights needs the indices of all influences being written
    influence_indices = om.MIntArray(list(range(len(skin_fn.influenceObjects()))))
    # Write all weights in one call
    skin_fn.setWeights(mesh_dag, vert_comp, influence_indices, weight_array, normalize=True)
5. Selection and query optimization
5.1 Cache query results
import maya.cmds as cmds

# ========== Not recommended: repeated queries ==========
def process_meshes_slow():
    all_meshes = cmds.ls(type='mesh')
    for mesh in all_meshes:
        if cmds.objExists(mesh):  # Redundant query
            if cmds.nodeType(mesh) == 'mesh':  # We already know it's a mesh
                parent = cmds.listRelatives(mesh, parent=True)[0]
                # ... more work

# ========== Recommended: trust and reuse what you've already queried ==========
def process_meshes_fast():
    all_meshes = cmds.ls(type='mesh', long=True)  # Long names avoid ambiguity
    for mesh in all_meshes:
        # Already returned by ls; no need to re-check existence or type
        parent = cmds.listRelatives(mesh, parent=True, fullPath=True)
        if not parent:
            continue
        parent = parent[0]
        # ... more work
5.2 Query with precise flags
import maya.cmds as cmds

# ========== Not recommended: redundant queries ==========
def get_transform_info_slow(obj):
    # This catch-all query does work whose result is thrown away
    all_info = cmds.xform(obj, query=True)
    translation = cmds.xform(obj, query=True, worldSpace=True, translation=True)
    rotation = cmds.xform(obj, query=True, worldSpace=True, rotation=True)
    scale = cmds.xform(obj, query=True, worldSpace=True, scale=True)
    return translation, rotation, scale

# ========== Recommended: query only what you need ==========
def get_transform_info_fast(obj):
    # One query per piece of data, nothing more
    translation = cmds.xform(obj, q=True, ws=True, t=True)
    rotation = cmds.xform(obj, q=True, ws=True, ro=True)
    scale = cmds.xform(obj, q=True, ws=True, s=True)
    return translation, rotation, scale
5.3 Pass lists to select instead of calling it repeatedly
import maya.cmds as cmds

# ========== Not recommended: many select calls ==========
def select_objects_slow(objects):
    cmds.select(clear=True)
    for obj in objects:
        cmds.select(obj, add=True)

# ========== Recommended: a single select call ==========
def select_objects_fast(objects):
    if objects:
        cmds.select(objects, replace=True)
    else:
        cmds.select(clear=True)
6. Data structure and algorithm optimization
6.1 Use the right data structure
# ========== Not recommended: membership tests on a list ==========
def find_duplicates_slow(objects):
    seen = []  # 'in' on a list is O(n)
    duplicates = []
    for obj in objects:
        if obj in seen:  # Scans the whole list every time
            duplicates.append(obj)
        else:
            seen.append(obj)
    return duplicates

# ========== Recommended: use a set ==========
def find_duplicates_fast(objects):
    seen = set()  # 'in' on a set is O(1)
    duplicates = []
    for obj in objects:
        if obj in seen:
            duplicates.append(obj)
        else:
            seen.add(obj)
    return duplicates

# ========== Even better: use Counter ==========
from collections import Counter

def find_duplicates_best(objects):
    counts = Counter(objects)
    return [obj for obj, count in counts.items() if count > 1]
6.2 Avoid string concatenation
import maya.cmds as cmds

# ========== Not recommended: concatenation in a loop ==========
def create_hierarchy_slow(count):
    for i in range(count):
        # Concatenating with str() is verbose and easy to get wrong
        name = "obj_" + str(i)
        cmds.createNode("transform", name=name)

# ========== Recommended: use f-strings (Python 3.6+) ==========
def create_hierarchy_fast(count):
    for i in range(count):
        name = f"obj_{i}"
        cmds.createNode("transform", name=name)

# ========== Or use .format() ==========
def create_hierarchy_format(count):
    for i in range(count):
        name = "obj_{}".format(i)
        cmds.createNode("transform", name=name)
6.3 Preallocation and list comprehensions
import maya.cmds as cmds

# ========== Slower: append in a loop ==========
def collect_positions_slow(objects):
    positions = []
    for obj in objects:
        pos = cmds.xform(obj, q=True, ws=True, t=True)
        positions.append(pos)
    return positions

# ========== Faster: a list comprehension ==========
def collect_positions_fast(objects):
    return [cmds.xform(obj, q=True, ws=True, t=True) for obj in objects]

# ========== With a filter condition ==========
def collect_visible_positions(objects):
    return [
        cmds.xform(obj, q=True, ws=True, t=True)
        for obj in objects
        if cmds.getAttr(f"{obj}.visibility")
    ]
7. Profiling and debugging tools
7.1 使用 timeit 进行简单计时
import timeit
import maya.cmds as cmds
# 测试单次执行时间
def test_function():
cmds.polyCube()
cmds.delete(cmds.ls(type='polyCube'))
# 执行 100 次并计算平均时间
time_taken = timeit.timeit(test_function, number=100)
print(f"平均执行时间: {time_taken/100:.4f} 秒")7.2 使用上下文管理器计时
import time
from contextlib import contextmanager

@contextmanager
def timer(name="operation"):
    """A simple timing context manager."""
    start = time.time()
    yield
    elapsed = time.time() - start
    print(f"{name} took: {elapsed:.4f} seconds")

# Usage
with timer("create 1000 cubes"):
    for i in range(1000):
        cmds.polyCube()
7.3 Detailed analysis with cProfile
import cProfile
import pstats
import io

def profile_function(func, *args, **kwargs):
    """
    Profile a function call.
    """
    profiler = cProfile.Profile()
    profiler.enable()
    result = func(*args, **kwargs)
    profiler.disable()
    # Print the statistics
    s = io.StringIO()
    ps = pstats.Stats(profiler, stream=s)
    ps.sort_stats('cumulative')  # Sort by cumulative time
    ps.print_stats(20)  # Show the 20 slowest functions
    print(s.getvalue())
    return result

# Usage example
def my_slow_function():
    import maya.cmds as cmds
    for i in range(1000):
        cmds.polyCube()
        cmds.move(i, 0, 0)

profile_function(my_slow_function)
7.4 Performance monitoring with a decorator
import time
import functools

def performance_monitor(func):
    """
    Decorator that reports a function's execution time.
    """
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        elapsed = time.time() - start
        print(f"{func.__name__} took: {elapsed:.4f} seconds")
        return result
    return wrapper

# Usage
@performance_monitor
def create_many_cubes(count):
    import maya.cmds as cmds
    for i in range(count):
        cmds.polyCube(name=f"cube_{i}")

create_many_cubes(100)
# Example output: create_many_cubes took: 2.3456 seconds
7.5 The Maya Profiler
Maya ships with a powerful built-in profiler that can analyze both Python and MEL code.
import maya.cmds as cmds

# Start the Maya Profiler
cmds.profiler(sampling=True)

# Run the code you want to analyze
for i in range(1000):
    cmds.polyCube()

# Stop the profiler
cmds.profiler(sampling=False)

# View the results in the Profiler window:
# Windows > General Editors > Profiler
8. NumPy and vectorized operations
For heavy numeric computation, NumPy can deliver a substantial speedup.
8.1 Installing and importing NumPy
# Install NumPy into Maya's Python environment (if it isn't already there)
# mayapy -m pip install numpy

import numpy as np
import maya.api.OpenMaya as om
8.2 Bulk computation with NumPy
import maya.api.OpenMaya as om
import maya.cmds as cmds
import numpy as np

# ========== Without NumPy (slower) ==========
def offset_vertices_slow(mesh_name, offset):
    """Offset every vertex along its normal."""
    sel = om.MSelectionList()
    sel.add(mesh_name)
    dag_path = sel.getDagPath(0)
    mesh_fn = om.MFnMesh(dag_path)
    positions = mesh_fn.getPoints(om.MSpace.kWorld)
    normals = mesh_fn.getVertexNormals(False, om.MSpace.kWorld)
    new_positions = []
    for i, pos in enumerate(positions):
        normal = normals[i]
        new_pos = om.MPoint(
            pos.x + normal.x * offset,
            pos.y + normal.y * offset,
            pos.z + normal.z * offset
        )
        new_positions.append(new_pos)
    mesh_fn.setPoints(om.MPointArray(new_positions), om.MSpace.kWorld)

# ========== With NumPy (much faster) ==========
def offset_vertices_fast(mesh_name, offset):
    """Vectorized version using NumPy."""
    sel = om.MSelectionList()
    sel.add(mesh_name)
    dag_path = sel.getDagPath(0)
    mesh_fn = om.MFnMesh(dag_path)
    # Fetch the data
    positions = mesh_fn.getPoints(om.MSpace.kWorld)
    normals = mesh_fn.getVertexNormals(False, om.MSpace.kWorld)
    # Convert to NumPy arrays
    pos_array = np.array([[p.x, p.y, p.z] for p in positions])
    norm_array = np.array([[n.x, n.y, n.z] for n in normals])
    # Vectorized math (much faster than a Python loop)
    new_pos_array = pos_array + norm_array * offset
    # Convert back to an MPointArray
    new_positions = om.MPointArray([
        om.MPoint(p[0], p[1], p[2])
        for p in new_pos_array
    ])
    mesh_fn.setPoints(new_positions, om.MSpace.kWorld)
8.3 NumPy for weight processing
import numpy as np
import maya.api.OpenMaya as om
import maya.api.OpenMayaAnim as oma

def normalize_weights_numpy(weights_2d):
    """
    Normalize skin weights with NumPy.

    Args:
        weights_2d: 2D weight array [[w1, w2, w3...], ...]
    Returns:
        The normalized weight array.
    """
    # Convert to a NumPy array
    weights_np = np.array(weights_2d)
    # Sum each row
    row_sums = weights_np.sum(axis=1, keepdims=True)
    # Avoid division by zero
    row_sums[row_sums == 0] = 1.0
    # Normalize
    normalized = weights_np / row_sums
    return normalized.tolist()
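The same row normalization can be sketched without NumPy, which is handy when a mayapy environment does not have it installed (this pure-Python fallback is an illustration, not part of the original tool):

```python
def normalize_weights_pure(weights_2d):
    """Row-normalize a 2D weight list without NumPy.

    Each row is divided by its sum; rows summing to zero are
    returned unchanged, mirroring the NumPy version's guard."""
    normalized = []
    for row in weights_2d:
        total = sum(row)
        if total == 0:
            normalized.append(list(row))
        else:
            normalized.append([w / total for w in row])
    return normalized
```

For large meshes the NumPy version is much faster, but the two produce the same result, so the pure version doubles as a reference for testing.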
def smooth_weights_numpy(weights_2d, iterations=1):
    """
    Smooth weights with NumPy.
    """
    weights_np = np.array(weights_2d)
    for _ in range(iterations):
        # Naive smoothing: average each vertex's weights with those of the
        # adjacent indices (a stand-in for true mesh neighbors)
        smoothed = np.copy(weights_np)
        for i in range(1, len(weights_np) - 1):
            smoothed[i] = (weights_np[i-1] + weights_np[i] + weights_np[i+1]) / 3.0
        weights_np = smoothed
    return weights_np.tolist()
9. Parallelism and multithreading
Maya's Python interpreter is constrained by the GIL (Global Interpreter Lock), but some workloads can still benefit from multithreading.
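The distinction that matters is I/O-bound versus CPU-bound work: blocking I/O (file reads, network calls) releases the GIL, so threads genuinely overlap there, while pure-Python computation does not. A minimal sketch of the pattern, independent of Maya (the helper name map_io_bound is chosen here for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def map_io_bound(worker, items, max_workers=4):
    """Apply worker to every item using a thread pool.

    Threads pay off when worker is I/O-bound, because blocking I/O
    releases the GIL; for pure-Python CPU work they usually do not,
    and multiprocessing (section 9.2) is the better fit."""
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        # map preserves input order in its results
        return list(executor.map(worker, items))
```

A typical use in a pipeline tool would be exporting or hashing many cache files on disk while the main thread stays responsive.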
9.1 Parallel processing with concurrent.futures
from concurrent.futures import ThreadPoolExecutor, as_completed
import maya.cmds as cmds
import maya.utils as utils

def process_mesh(mesh_name):
    """Process a single mesh."""
    # Maya commands must run on the main thread
    def maya_operation():
        vert_count = cmds.polyEvaluate(mesh_name, vertex=True)
        # ... more work
        return vert_count

    # executeInMainThreadWithResult guarantees main-thread execution
    result = utils.executeInMainThreadWithResult(maya_operation)
    return mesh_name, result

def process_all_meshes_parallel(mesh_list, max_workers=4):
    """
    Process multiple meshes in parallel.
    Note: because of Maya's threading restrictions, the real-world gain may be limited.
    """
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        # Submit every task
        future_to_mesh = {
            executor.submit(process_mesh, mesh): mesh
            for mesh in mesh_list
        }
        # Collect the results
        for future in as_completed(future_to_mesh):
            mesh_name, result = future.result()
            results[mesh_name] = result
    return results
9.2 Multiprocessing (for independent computation)
from multiprocessing import Pool
import numpy as np

def compute_intensive_task(data):
    """
    A CPU-bound task with no Maya API calls,
    e.g. heavy math or data processing.
    """
    # Compute with NumPy
    result = np.sqrt(np.sum(np.array(data) ** 2))
    return result

def parallel_compute(data_list, processes=4):
    """
    Run computations in parallel across processes.
    Note: only suitable for pure computation that never touches the Maya scene.
    """
    with Pool(processes=processes) as pool:
        results = pool.map(compute_intensive_task, data_list)
    return results

# Usage example
data_sets = [
    [1, 2, 3, 4, 5] * 1000,
    [6, 7, 8, 9, 10] * 1000,
    [11, 12, 13, 14, 15] * 1000,
]
results = parallel_compute(data_sets)
10. Caching and lazy loading
10.1 Caching results with functools.lru_cache
from functools import lru_cache
import maya.cmds as cmds

# ========== Not recommended: recompute every time ==========
def get_mesh_info(mesh_name):
    """Recomputes on every call."""
    vert_count = cmds.polyEvaluate(mesh_name, vertex=True)
    face_count = cmds.polyEvaluate(mesh_name, face=True)
    return vert_count, face_count

# Calling this in a loop for the same mesh repeats the work
for i in range(100):
    info = get_mesh_info("pSphere1")  # Recomputed every time

# ========== Recommended: cache the result ==========
@lru_cache(maxsize=128)
def get_mesh_info_cached(mesh_name):
    """The result is cached per mesh name."""
    vert_count = cmds.polyEvaluate(mesh_name, vertex=True)
    face_count = cmds.polyEvaluate(mesh_name, face=True)
    return vert_count, face_count

# The first call computes; subsequent calls return the cached result
for i in range(100):
    info = get_mesh_info_cached("pSphere1")  # Computed only once

Caveat: cached results go stale if the mesh is edited; call get_mesh_info_cached.cache_clear() after scene changes.
10.2 Manual cache management
class MeshCache:
    """
    Manually managed cache of mesh information.
    """
    def __init__(self):
        self._cache = {}

    def get_mesh_info(self, mesh_name, force_refresh=False):
        """
        Get mesh info, using the cache.

        Args:
            mesh_name: name of the mesh
            force_refresh: force a cache refresh
        """
        if force_refresh or mesh_name not in self._cache:
            # Compute and cache
            vert_count = cmds.polyEvaluate(mesh_name, vertex=True)
            face_count = cmds.polyEvaluate(mesh_name, face=True)
            edge_count = cmds.polyEvaluate(mesh_name, edge=True)
            self._cache[mesh_name] = {
                'vertices': vert_count,
                'faces': face_count,
                'edges': edge_count
            }
        return self._cache[mesh_name]

    def clear_cache(self, mesh_name=None):
        """Clear the cache."""
        if mesh_name:
            self._cache.pop(mesh_name, None)
        else:
            self._cache.clear()

# Usage example
cache = MeshCache()
# The first call computes
info1 = cache.get_mesh_info("pSphere1")
# Later calls hit the cache
info2 = cache.get_mesh_info("pSphere1")
# Force a refresh
info3 = cache.get_mesh_info("pSphere1", force_refresh=True)
10.3 The lazy-loading pattern
class LazyMeshData:
    """
    Lazily loaded mesh data:
    nothing is computed until it is actually accessed.
    """
    def __init__(self, mesh_name):
        self.mesh_name = mesh_name
        self._positions = None
        self._normals = None
        self._uvs = None

    @property
    def positions(self):
        """Vertex positions, loaded on first access."""
        if self._positions is None:
            import maya.api.OpenMaya as om
            sel = om.MSelectionList()
            sel.add(self.mesh_name)
            dag_path = sel.getDagPath(0)
            mesh_fn = om.MFnMesh(dag_path)
            self._positions = mesh_fn.getPoints(om.MSpace.kWorld)
        return self._positions

    @property
    def normals(self):
        """Vertex normals, loaded on first access."""
        if self._normals is None:
            import maya.api.OpenMaya as om
            sel = om.MSelectionList()
            sel.add(self.mesh_name)
            dag_path = sel.getDagPath(0)
            mesh_fn = om.MFnMesh(dag_path)
            self._normals = mesh_fn.getVertexNormals(False, om.MSpace.kWorld)
        return self._normals

    @property
    def uvs(self):
        """UV coordinates, loaded on first access."""
        if self._uvs is None:
            import maya.api.OpenMaya as om
            sel = om.MSelectionList()
            sel.add(self.mesh_name)
            dag_path = sel.getDagPath(0)
            mesh_fn = om.MFnMesh(dag_path)
            self._uvs = mesh_fn.getUVs()
        return self._uvs

    def clear_cache(self):
        """Drop all cached data."""
        self._positions = None
        self._normals = None
        self._uvs = None

# Usage example
mesh_data = LazyMeshData("pSphere1")
# Computed only on access
positions = mesh_data.positions  # Positions are computed here
# Subsequent access hits the cache
positions_again = mesh_data.positions  # Returned from cache
11. Avoiding common performance traps
11.1 Avoid unnecessary queries inside loops
import maya.cmds as cmds

# ========== Not recommended ==========
def bad_example():
    meshes = cmds.ls(type='mesh')
    for mesh in meshes:
        # Query the parent every time
        parent = cmds.listRelatives(mesh, parent=True)
        if parent:
            # Query the type every time
            if cmds.nodeType(parent[0]) == 'transform':
                # Query visibility every time
                if cmds.getAttr(f"{parent[0]}.visibility"):
                    print(f"{parent[0]} is visible")

# ========== Recommended ==========
def good_example():
    meshes = cmds.ls(type='mesh', long=True)
    # Gather everything you need up front
    mesh_data = []
    for mesh in meshes:
        parent = cmds.listRelatives(mesh, parent=True, fullPath=True)
        if not parent:
            continue
        parent = parent[0]
        # Query all required attributes once
        is_visible = cmds.getAttr(f"{parent}.visibility")
        mesh_data.append({
            'mesh': mesh,
            'parent': parent,
            'visible': is_visible
        })
    # Work with the pre-gathered data
    for data in mesh_data:
        if data['visible']:
            print(f"{data['parent']} is visible")
11.2 Avoid excessive getAttr/setAttr calls
import maya.cmds as cmds

# ========== Not recommended: one setAttr per attribute ==========
def set_transform_slow(obj, tx, ty, tz, rx, ry, rz):
    cmds.setAttr(f"{obj}.translateX", tx)
    cmds.setAttr(f"{obj}.translateY", ty)
    cmds.setAttr(f"{obj}.translateZ", tz)
    cmds.setAttr(f"{obj}.rotateX", rx)
    cmds.setAttr(f"{obj}.rotateY", ry)
    cmds.setAttr(f"{obj}.rotateZ", rz)

# ========== Recommended: set compound attributes ==========
def set_transform_fast(obj, tx, ty, tz, rx, ry, rz):
    cmds.setAttr(f"{obj}.translate", tx, ty, tz, type='double3')
    cmds.setAttr(f"{obj}.rotate", rx, ry, rz, type='double3')

# ========== Better still: use xform ==========
def set_transform_better(obj, tx, ty, tz, rx, ry, rz):
    cmds.xform(obj, translation=(tx, ty, tz), rotation=(rx, ry, rz))
11.3 Avoid unnecessary string operations
import maya.cmds as cmds

# ========== Not recommended: heavy string formatting in the loop ==========
def query_attributes_slow(obj, indices):
    results = []
    for i in indices:
        # Build the attribute name string on every iteration
        attr_name = f"{obj}.vtx[{i}]"
        value = cmds.getAttr(attr_name)
        results.append(value)
    return results

# ========== Recommended: reduce the string work ==========
def query_attributes_fast(obj, indices):
    # Prefer a bulk operation if one exists;
    # failing that, at least pre-build the template
    attr_template = f"{obj}.vtx[{{}}]"
    results = []
    for i in indices:
        value = cmds.getAttr(attr_template.format(i))
        results.append(value)
    return results

# ========== Best: use the API and fetch in bulk ==========
def query_attributes_best(obj, indices):
    import maya.api.OpenMaya as om
    sel = om.MSelectionList()
    sel.add(obj)
    dag_path = sel.getDagPath(0)
    mesh_fn = om.MFnMesh(dag_path)
    # Fetch every position at once
    all_positions = mesh_fn.getPoints(om.MSpace.kWorld)
    # Return only the requested indices
    return [all_positions[i] for i in indices]
11.4 Avoid deeply nested loops
import maya.cmds as cmds

# ========== Not recommended: O(n²) complexity ==========
def find_closest_objects_slow(objects):
    """Find the nearest neighbor of every object."""
    closest = {}
    for obj1 in objects:
        pos1 = cmds.xform(obj1, q=True, ws=True, t=True)
        min_distance = float('inf')
        closest_obj = None
        for obj2 in objects:
            if obj1 == obj2:
                continue
            pos2 = cmds.xform(obj2, q=True, ws=True, t=True)
            # Compute the distance
            distance = sum((a - b) ** 2 for a, b in zip(pos1, pos2)) ** 0.5
            if distance < min_distance:
                min_distance = distance
                closest_obj = obj2
        closest[obj1] = closest_obj
    return closest

# ========== Recommended: precompute positions, use a better algorithm ==========
def find_closest_objects_fast(objects):
    """Use precomputation and a spatial data structure."""
    import numpy as np
    from scipy.spatial import cKDTree

    # Precompute every position
    positions = []
    for obj in objects:
        pos = cmds.xform(obj, q=True, ws=True, t=True)
        positions.append(pos)
    # Use a KD-tree for fast nearest-neighbor lookups
    positions_np = np.array(positions)
    tree = cKDTree(positions_np)
    closest = {}
    for i, obj in enumerate(objects):
        # Query the 2 nearest points (the first is the point itself)
        distances, indices = tree.query(positions_np[i], k=2)
        closest_idx = indices[1]
        closest[obj] = objects[closest_idx]
    return closest
12. Memory optimization
12.1 Release large data structures promptly
import maya.api.OpenMaya as om
import maya.cmds as cmds

def process_large_mesh_bad(mesh_name):
    """Holds on to memory longer than needed."""
    sel = om.MSelectionList()
    sel.add(mesh_name)
    dag_path = sel.getDagPath(0)
    mesh_fn = om.MFnMesh(dag_path)
    # Fetch a lot of data
    positions = mesh_fn.getPoints(om.MSpace.kWorld)
    normals = mesh_fn.getVertexNormals(False, om.MSpace.kWorld)
    # Process the data...
    result = len(positions)
    # positions and normals stay in memory until the function returns
    return result

def process_large_mesh_good(mesh_name):
    """Releases memory proactively."""
    sel = om.MSelectionList()
    sel.add(mesh_name)
    dag_path = sel.getDagPath(0)
    mesh_fn = om.MFnMesh(dag_path)
    # Fetch the data
    positions = mesh_fn.getPoints(om.MSpace.kWorld)
    # Process the data
    result = len(positions)
    # Explicitly drop large objects that are no longer needed
    del positions
    return result
12.2 Use generators instead of lists
import maya.cmds as cmds

# ========== More memory: return a complete list ==========
def get_all_transforms():
    """Return every transform at once."""
    return cmds.ls(type='transform')

# ========== Less memory: use a generator ==========
def iter_transforms():
    """Yield transforms one at a time.

    Note: ls itself still builds the full name list; the memory saving
    comes when each item is expanded into larger per-item data downstream."""
    all_transforms = cmds.ls(type='transform')
    for transform in all_transforms:
        yield transform

# With a list (everything loaded at once)
all_trans = get_all_transforms()
for trans in all_trans:
    # process...
    pass

# With a generator (items handled one by one)
for trans in iter_transforms():
    # process...
    pass
12.3 Process large datasets in batches
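The core of batch processing is just fixed-size slicing. As a standalone sketch (the helper name iter_batches is chosen here for illustration):

```python
def iter_batches(items, batch_size=100):
    """Yield successive fixed-size slices of items.

    The final slice may be shorter when len(items) is not a
    multiple of batch_size."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]
```

Because it is a generator, only one batch's worth of derived data needs to live in memory at a time, which is what the full example below builds on.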
import maya.cmds as cmds
import gc

def process_many_objects_batched(objects, batch_size=100):
    """
    Process many objects in batches to avoid holding too much in memory at once.
    """
    total = len(objects)
    for i in range(0, total, batch_size):
        batch = objects[i:i+batch_size]
        # Process this batch
        for obj in batch:
            # ... processing logic
            pass
        # Optional: collect garbage between batches
        gc.collect()
        # Report progress
        progress = min(i + batch_size, total) / total * 100
        print(f"Progress: {progress:.1f}%")
13. A complete, real-world optimization example
This case study shows several optimization techniques working together.
Case: batch-importing and placing assets
Requirement: import 10,000 assets into the scene and place them on a terrain at the correct height.
import maya.cmds as cmds
import maya.api.OpenMaya as om
import time
from contextlib import contextmanager

# ========== Helper context manager ==========
@contextmanager
def optimized_context():
    """Optimized context manager for the batch work below."""
    # Save the current selection
    prev_selection = cmds.ls(selection=True)
    # Suspend refresh and undo
    cmds.refresh(suspend=True)
    cmds.undoInfo(stateWithoutFlush=False)
    try:
        yield
    finally:
        # Restore state
        cmds.undoInfo(stateWithoutFlush=True)
        cmds.refresh(suspend=False)
        cmds.refresh()
        # Restore the selection
        if prev_selection:
            cmds.select(prev_selection)
# ========== Unoptimized version (slow) ==========
def place_assets_slow(asset_path, terrain_mesh, count=1000):
    """
    The unoptimized version.
    Estimated runtime: 5-10 minutes for 1000 assets.
    """
    import random

    print("Importing assets (unoptimized)...")
    start = time.time()
    for i in range(count):
        # Import the asset
        imported = cmds.file(asset_path, i=True, returnNewNodes=True)
        asset_transform = imported[0]
        # Rename it
        asset_name = cmds.rename(asset_transform, f"asset_{i}")
        # Random position
        x = random.uniform(-100, 100)
        z = random.uniform(-100, 100)
        # Query the terrain height one asset at a time (very slow)
        pos = (x, 0, z)
        cmds.xform(asset_name, translation=pos)
        # Use a temporary closestPointOnMesh node to find the height
        cpom = cmds.createNode('closestPointOnMesh')
        cmds.connectAttr(f"{terrain_mesh}.worldMesh[0]", f"{cpom}.inMesh")
        cmds.setAttr(f"{cpom}.inPosition", x, 0, z, type='double3')
        y = cmds.getAttr(f"{cpom}.position.positionY")
        cmds.delete(cpom)
        # Set the final position
        cmds.xform(asset_name, translation=(x, y, z))
        # Random rotation
        rotation_y = random.uniform(0, 360)
        cmds.xform(asset_name, rotation=(0, rotation_y, 0))
    elapsed = time.time() - start
    print(f"Done! Elapsed: {elapsed:.2f} seconds")
# ========== Optimized version (fast) ==========
def place_assets_optimized(asset_path, terrain_mesh, count=1000):
    """
    The optimized version.
    Estimated runtime: 10-30 seconds for 1000 assets.
    Speedup: 20-60x.
    """
    import random

    print("Importing assets (optimized)...")
    start = time.time()
    # 1. Preprocess the terrain data (fetched once)
    sel = om.MSelectionList()
    sel.add(terrain_mesh)
    terrain_dag = sel.getDagPath(0)
    terrain_fn = om.MFnMesh(terrain_dag)

    # 2. Pre-generate every random position
    positions_2d = []
    for i in range(count):
        x = random.uniform(-100, 100)
        z = random.uniform(-100, 100)
        positions_2d.append((x, z))

    # 3. Query the terrain heights via API 2.0 (no temporary nodes)
    def get_height_on_terrain(x, z):
        """Query the height with MFnMesh.getClosestPoint."""
        point = om.MPoint(x, 0, z)
        closest_point = terrain_fn.getClosestPoint(point, om.MSpace.kWorld)[0]
        return closest_point.y

    heights = [get_height_on_terrain(x, z) for x, z in positions_2d]

    # 4. Pre-generate every rotation
    rotations = [random.uniform(0, 360) for _ in range(count)]

    # 5. Import and place the assets inside the optimized context
    with optimized_context():
        # Create a parent group
        main_group = cmds.createNode('transform', name='assets_group')
        for i in range(count):
            # Import the asset (referencing is faster)
            namespace = f"asset_{i}"
            ref_node = cmds.file(
                asset_path,
                reference=True,
                namespace=namespace,
                returnNewNodes=True
            )
            # Find the reference's top-level node
            ref_nodes = cmds.referenceQuery(ref_node[0], nodes=True)
            transforms = cmds.ls(ref_nodes, type='transform', long=True)
            top_node = None
            for trans in transforms:
                if not cmds.listRelatives(trans, parent=True):
                    top_node = trans
                    break
            if not top_node:
                continue
            # Referenced nodes cannot be renamed; the namespace keeps names unique
            # Parent under the main group
            top_node = cmds.parent(top_node, main_group)[0]
            # Apply the precomputed transforms
            x, z = positions_2d[i]
            y = heights[i]
            rotation_y = rotations[i]
            # Set every transform attribute in one call
            cmds.xform(
                top_node,
                translation=(x, y, z),
                rotation=(0, rotation_y, 0),
                worldSpace=True
            )
            # Report progress (every 100 assets, to limit IO)
            if (i + 1) % 100 == 0:
                print(f"Processed: {i+1}/{count}")
    elapsed = time.time() - start
    print(f"Done! Elapsed: {elapsed:.2f} seconds")
    print(f"Average per asset: {elapsed/count*1000:.2f} ms")
# ========== Usage example ==========
def demo_asset_placement():
    """Demonstrate the asset placement."""
    # Create a test terrain
    terrain = cmds.polyPlane(name='terrain', width=200, height=200, sx=50, sy=50)[0]
    # Create a simple test asset and save it out
    test_asset = cmds.polyCube(name='test_asset')[0]
    test_asset_path = cmds.file(rename='/tmp/test_asset.ma')
    cmds.file(save=True, type='mayaAscii')
    cmds.file(new=True, force=True)
    # Recreate the terrain
    terrain = cmds.polyPlane(name='terrain', width=200, height=200, sx=50, sy=50)[0]
    # Test the unoptimized version (small count)
    place_assets_slow(test_asset_path, terrain, count=10)
    # Clean up
    cmds.file(new=True, force=True)
    terrain = cmds.polyPlane(name='terrain', width=200, height=200, sx=50, sy=50)[0]
    # Test the optimized version (large count)
    place_assets_optimized(test_asset_path, terrain, count=1000)
优化要点总结:
- 批量操作:一次性获取地形数据,而不是每次都查询
- 预计算:提前生成所有随机数和位置
- 上下文管理:使用优化的上下文管理器暂停刷新和撤销
- API 选择:使用 API 2.0 进行几何查询
- 减少节点创建:避免重复创建临时节点(closestPointOnMesh)
- 批量设置:一次性设置多个变换属性
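其中"预计算"一条可以脱离 Maya 单独验证:提前生成所有随机位置与旋转后,主循环内只剩下查表操作。下面是一个纯 Python 的最小示意(函数名与参数均为示例假设):

```python
import random

def precompute_placements(count, seed=42):
    """预生成所有随机位置与旋转,主循环内只做索引访问"""
    rng = random.Random(seed)  # 固定种子,便于复现
    positions = [(rng.uniform(-100, 100), rng.uniform(-100, 100)) for _ in range(count)]
    rotations = [rng.uniform(0, 360) for _ in range(count)]
    return positions, rotations

positions, rotations = precompute_placements(1000)
```

这样摆放循环里只剩 `positions[i]`、`rotations[i]` 的查表,随机数生成的开销被移出了热路径。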
14. 字符串操作优化
14.1 避免重复的对象名称解析
import maya.cmds as cmds
# ========== 不推荐:每次都解析完整路径 ==========
def modify_hierarchy_slow(root):
"""每次都重新构建路径字符串"""
children = cmds.listRelatives(root, children=True, fullPath=True) or []
for child in children:
# 每次都用字符串拼接
cmds.setAttr(f"{child}.visibility", 1)
grandchildren = cmds.listRelatives(child, children=True, fullPath=True) or []
for grandchild in grandchildren:
cmds.setAttr(f"{grandchild}.visibility", 1)
# ========== 推荐:使用长名称,避免歧义 ==========
def modify_hierarchy_fast(root):
"""使用 fullPath 避免歧义和重复查询"""
# 一次性获取所有后代
descendants = cmds.listRelatives(root, allDescendents=True, fullPath=True) or []
# 批量操作
for node in descendants:
cmds.setAttr(f"{node}.visibility", 1)
14.2 字符串格式化性能对比
import timeit
obj_name = "pCube1"
attr_name = "translateX"
# 测试不同的字符串格式化方法
def test_concatenation():
return obj_name + "." + attr_name
def test_format():
return "{}.{}".format(obj_name, attr_name)
def test_fstring():
return f"{obj_name}.{attr_name}"
def test_percent():
return "%s.%s" % (obj_name, attr_name)
# 性能测试
iterations = 100000
print("字符串格式化性能对比(100,000 次迭代):")
print(f"+ 拼接: {timeit.timeit(test_concatenation, number=iterations):.4f}秒")
print(f"format(): {timeit.timeit(test_format, number=iterations):.4f}秒")
print(f"f-string: {timeit.timeit(test_fstring, number=iterations):.4f}秒")
print(f"% 格式化: {timeit.timeit(test_percent, number=iterations):.4f}秒")
# 结果(仅供参考):
# + 拼接: 0.0156秒
# format(): 0.0312秒
# f-string: 0.0189秒(Python 3.6+,推荐使用)
# % 格式化: 0.0198秒
15. DG(依赖图)和 DAG(有向无环图)优化
15.1 高效遍历 DAG
import maya.api.OpenMaya as om
import maya.cmds as cmds
# ========== 使用 cmds(较慢)==========
def traverse_hierarchy_cmds(root):
"""使用 cmds 遍历层次结构"""
all_descendants = []
def recurse(node):
children = cmds.listRelatives(node, children=True, fullPath=True) or []
for child in children:
all_descendants.append(child)
recurse(child)
recurse(root)
return all_descendants
# ========== 使用 API 2.0(快)==========
def traverse_hierarchy_api(root):
"""使用 API 2.0 的 DAG 迭代器"""
all_descendants = []
sel = om.MSelectionList()
sel.add(root)
root_dag = sel.getDagPath(0)
# 使用 DAG 迭代器
dag_iter = om.MItDag()
dag_iter.reset(root_dag, om.MItDag.kDepthFirst)
while not dag_iter.isDone():
current_dag = dag_iter.getPath()
all_descendants.append(current_dag.fullPathName())
dag_iter.next()
return all_descendants
# ========== 更快:使用 cmds.listRelatives 的 allDescendents 标志 ==========
def traverse_hierarchy_fastest(root):
"""最简单且高效的方法"""
return cmds.listRelatives(root, allDescendents=True, fullPath=True) or []
15.2 过滤特定类型的节点
import maya.api.OpenMaya as om
import maya.cmds as cmds
# ========== 使用 cmds(中等速度)==========
def find_meshes_in_hierarchy_cmds(root):
"""使用 cmds 查找层次结构中的所有 mesh"""
descendants = cmds.listRelatives(root, allDescendents=True, fullPath=True) or []
meshes = []
for node in descendants:
shapes = cmds.listRelatives(node, shapes=True, fullPath=True) or []
for shape in shapes:
if cmds.nodeType(shape) == 'mesh':
meshes.append(shape)
return meshes
# ========== 使用 API 2.0(更快)==========
def find_meshes_in_hierarchy_api(root):
"""使用 API 2.0 迭代器过滤 mesh"""
meshes = []
sel = om.MSelectionList()
sel.add(root)
root_dag = sel.getDagPath(0)
# 使用带类型过滤的迭代器
dag_iter = om.MItDag(om.MItDag.kDepthFirst, om.MFn.kMesh)
dag_iter.reset(root_dag)
while not dag_iter.isDone():
dag_path = dag_iter.getPath()
meshes.append(dag_path.fullPathName())
dag_iter.next()
return meshes
# ========== 最简单的方法 ==========
def find_meshes_in_hierarchy_simple(root):
"""使用 cmds.listRelatives 的 type 标志"""
return cmds.listRelatives(root, allDescendents=True, type='mesh', fullPath=True) or []
16. 脚本作业(Script Jobs)优化
Script Jobs 如果使用不当,会严重影响 Maya 的整体性能。
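高频事件回调的代价可以先脱离 Maya 估算。下面是一个与 Maya 无关的纯 Python 节流示意(`Throttle` 类与可注入的时钟函数均为示例假设,思路与 16.3 节的 ThrottledCallback 相同):

```python
import itertools

class Throttle:
    """最小节流示意:两次实际执行之间至少间隔 min_interval 个时钟单位"""
    def __init__(self, func, min_interval, clock):
        self.func = func
        self.min_interval = min_interval
        self.clock = clock                  # 可注入的时钟函数,便于测试
        self.last_call = float('-inf')

    def __call__(self):
        now = self.clock()
        if now - self.last_call >= self.min_interval:
            self.last_call = now
            self.func()

calls = []
ticks = itertools.count()                   # 虚拟时钟:每次读取前进 1
throttled = Throttle(lambda: calls.append(1), min_interval=5, clock=lambda: next(ticks))

for _ in range(20):                         # 模拟 20 次高频事件
    throttled()

# 20 次触发只真正执行了 4 次回调(时钟 0、5、10、15 各一次)
```

把可注入时钟换成 `time.time` 就得到了实际可用的版本;用虚拟时钟则可以在单元测试里确定性地验证节流行为。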
16.1 避免创建过多的 Script Jobs
import maya.cmds as cmds
# ========== 不推荐:为每个对象创建 script job ==========
def setup_monitors_bad(objects):
"""为每个对象创建监视器(糟糕的做法)"""
job_ids = []
for obj in objects:
# 每个对象一个 script job
job_id = cmds.scriptJob(
attributeChange=[f"{obj}.translateX", lambda o=obj: on_transform_changed(o)]
)
job_ids.append(job_id)
return job_ids
# ========== 推荐:使用单个 script job 监视场景变化 ==========
def setup_monitor_good(objects):
"""使用单个 script job 监视所有对象"""
# 将需要监视的对象存储在集合中
global monitored_objects
monitored_objects = set(objects)
# 创建单个 script job
job_id = cmds.scriptJob(
event=["timeChanged", on_scene_changed]
)
return job_id
def on_scene_changed():
"""统一的场景变化处理函数"""
global monitored_objects
for obj in monitored_objects:
if cmds.objExists(obj):
# 处理变化
pass
16.2 及时清理 Script Jobs
import maya.cmds as cmds
class ScriptJobManager:
"""Script Job 管理器"""
def __init__(self):
self.job_ids = []
def add_job(self, *args, **kwargs):
"""添加 script job"""
job_id = cmds.scriptJob(*args, **kwargs)
self.job_ids.append(job_id)
return job_id
def clear_all(self):
"""清理所有 script jobs"""
for job_id in self.job_ids:
if cmds.scriptJob(exists=job_id):
cmds.scriptJob(kill=job_id, force=True)
self.job_ids.clear()
def __del__(self):
"""析构时自动清理"""
self.clear_all()
# 使用示例
job_manager = ScriptJobManager()
# 添加 jobs
job_manager.add_job(event=["SelectionChanged", on_selection_changed])
job_manager.add_job(event=["timeChanged", on_time_changed])
# 清理
job_manager.clear_all()
16.3 优化 Script Job 回调函数
import maya.cmds as cmds
import time
# ========== 不推荐:耗时的回调函数 ==========
def slow_callback():
"""每次事件都执行耗时操作"""
# 这会严重拖慢 Maya
all_meshes = cmds.ls(type='mesh')
for mesh in all_meshes:
cmds.polyEvaluate(mesh, vertex=True)
# ========== 推荐:使用节流(throttle)==========
class ThrottledCallback:
"""节流回调类"""
def __init__(self, func, min_interval=0.1):
self.func = func
self.min_interval = min_interval
self.last_call_time = 0
def __call__(self, *args, **kwargs):
current_time = time.time()
if current_time - self.last_call_time >= self.min_interval:
self.last_call_time = current_time
return self.func(*args, **kwargs)
# 使用示例
def expensive_operation():
"""耗时操作"""
print("执行耗时操作")
# ... 耗时代码
# 创建节流版本(最多每100ms执行一次)
throttled_op = ThrottledCallback(expensive_operation, min_interval=0.1)
# 注册 script job
job_id = cmds.scriptJob(event=["timeChanged", throttled_op])
17. Maya 命令插件(MPxCommand)优化
如果需要极致的性能,可以考虑使用 C++ 编写 Maya 插件。但即使使用 Python,也可以通过继承 MPxCommand 来获得更好的性能。
17.1 创建自定义命令
import sys
import maya.api.OpenMaya as om
def maya_useNewAPI():
"""告诉 Maya 使用 API 2.0"""
pass
class OptimizedTransformCommand(om.MPxCommand):
"""优化的变换命令"""
kPluginCmdName = "optimizedTransform"
def __init__(self):
om.MPxCommand.__init__(self)
@staticmethod
def cmdCreator():
return OptimizedTransformCommand()
def doIt(self, args):
"""命令执行"""
# 解析参数
arg_data = om.MArgParser(self.syntax(), args)
# 获取选择
selection = om.MGlobal.getActiveSelectionList()
if selection.isEmpty():
om.MGlobal.displayWarning("请选择至少一个对象")
return
# 批量处理所有选择的对象
translation = om.MVector(
arg_data.flagArgumentDouble('tx', 0) if arg_data.isFlagSet('tx') else 0,
arg_data.flagArgumentDouble('ty', 0) if arg_data.isFlagSet('ty') else 0,
arg_data.flagArgumentDouble('tz', 0) if arg_data.isFlagSet('tz') else 0
)
for i in range(selection.length()):
dag_path = selection.getDagPath(i)
transform_fn = om.MFnTransform(dag_path)
# 使用 API 直接操作
current_trans = transform_fn.translation(om.MSpace.kWorld)
new_trans = current_trans + translation
transform_fn.setTranslation(new_trans, om.MSpace.kWorld)
self.setResult("成功处理 {} 个对象".format(selection.length()))
@staticmethod
def syntaxCreator():
"""定义命令语法"""
syntax = om.MSyntax()
syntax.addFlag('tx', 'translateX', om.MSyntax.kDouble)
syntax.addFlag('ty', 'translateY', om.MSyntax.kDouble)
syntax.addFlag('tz', 'translateZ', om.MSyntax.kDouble)
return syntax
# 插件初始化和卸载
def initializePlugin(plugin):
plugin_fn = om.MFnPlugin(plugin)
try:
plugin_fn.registerCommand(
OptimizedTransformCommand.kPluginCmdName,
OptimizedTransformCommand.cmdCreator,
OptimizedTransformCommand.syntaxCreator
)
except Exception:
sys.stderr.write(f"注册命令失败: {OptimizedTransformCommand.kPluginCmdName}\n")
raise
def uninitializePlugin(plugin):
plugin_fn = om.MFnPlugin(plugin)
try:
plugin_fn.deregisterCommand(OptimizedTransformCommand.kPluginCmdName)
except Exception:
sys.stderr.write(f"注销命令失败: {OptimizedTransformCommand.kPluginCmdName}\n")
raise
17.2 使用插件
import maya.cmds as cmds
# 加载插件
cmds.loadPlugin('/path/to/optimizedTransformPlugin.py')
# 使用自定义命令
cmds.select('pCube*')
cmds.optimizedTransform(tx=5, ty=10, tz=0)
# 卸载插件
cmds.unloadPlugin('optimizedTransformPlugin.py')
18. 使用 C++ 扩展提升性能
对于极端性能要求,可以使用 C++ 编写 Maya 插件或使用 Cython。
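在引入 Cython 之前,先准备一个纯 Python 参考实现很有用:它既是正确性基准,也能用来衡量加速是否值得。下面是与 18.1 节相同归一化逻辑的纯 Python 示意(函数名为假设):

```python
def normalize_weights_py(weights):
    """纯 Python 参考实现:每行权重除以该行总和,总和为 0 的行整行置零"""
    result = []
    for row in weights:
        row_sum = sum(row)
        if row_sum > 0:
            result.append([w / row_sum for w in row])
        else:
            result.append([0.0] * len(row))
    return result

normalized = normalize_weights_py([[1.0, 1.0, 2.0], [0.0, 0.0, 0.0]])
```

用同样的输入分别跑参考实现与 Cython 版本,先断言结果一致,再比较耗时。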
18.1 使用 Cython 加速 Python 代码
# 文件: fast_math.pyx
import numpy as np
cimport numpy as np
cimport cython
@cython.boundscheck(False)
@cython.wraparound(False)
def fast_normalize_weights(double[:, :] weights):
"""
使用 Cython 加速权重归一化
"""
cdef int n_verts = weights.shape[0]
cdef int n_influences = weights.shape[1]
cdef int i, j
cdef double row_sum
cdef np.ndarray[np.float64_t, ndim=2] result = np.empty((n_verts, n_influences))
for i in range(n_verts):
row_sum = 0.0
for j in range(n_influences):
row_sum += weights[i, j]
if row_sum > 0:
for j in range(n_influences):
result[i, j] = weights[i, j] / row_sum
else:
for j in range(n_influences):
result[i, j] = 0.0
return result
18.2 编译和使用 Cython 模块
# setup.py
from distutils.core import setup
from Cython.Build import cythonize
import numpy
setup(
ext_modules=cythonize("fast_math.pyx"),
include_dirs=[numpy.get_include()]
)
# 编译命令:
# mayapy setup.py build_ext --inplace
# 在 Maya 中使用:
import fast_math
import numpy as np
weights = np.random.rand(10000, 5)
normalized = fast_math.fast_normalize_weights(weights)
19. 最佳实践清单
19.1 性能优化检查清单
API 选择:
- 对于性能关键代码,使用
maya.api.OpenMaya(API 2.0) - 对于简单 UI 操作,使用
maya.cmds即可 - 避免使用 PyMEL 进行密集操作
批处理:
- 使用
cmds.refresh(suspend=True)暂停视口刷新 - 使用撤销上下文管理器
- 批量获取/设置属性而非逐个操作
- 使用批量查询减少 Maya 命令调用
数据结构:
- 使用集合(set)而非列表(list)进行成员检查
- 使用字典(dict)进行快速查找
- 考虑使用 NumPy 进行数值计算
- 使用生成器处理大型数据集
查询优化:
- 缓存重复查询的结果
- 使用
@lru_cache装饰器缓存函数结果 - 使用精确的查询标志
- 使用长名称(fullPath)避免歧义
算法优化:
- 避免 O(n²) 或更高复杂度的算法
- 使用合适的数据结构(KD-Tree, 哈希表等)
- 预计算可重用的数据
- 考虑空间换时间的策略
内存管理:
- 及时释放大型数据结构
- 使用生成器而非列表
- 分批处理大型数据集
- 避免内存泄漏(特别是 script jobs)
性能分析:
- 使用
cProfile找到性能瓶颈 - 使用计时器测量关键代码段
- 使用 Maya Profiler 分析整体性能
- 先测量,再优化
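清单中"数据结构"与"查询优化"两条可以用纯 Python 快速验证(与 Maya 无关的示意,`expensive_query` 为假设的昂贵查询):

```python
import timeit
from functools import lru_cache

items = list(range(10000))
items_set = set(items)

# set 成员检查是 O(1),list 是 O(n):查找靠后的元素时差距明显
t_list = timeit.timeit(lambda: 9999 in items, number=1000)
t_set = timeit.timeit(lambda: 9999 in items_set, number=1000)

@lru_cache(maxsize=256)
def expensive_query(name):
    """假设这里是一次昂贵的场景查询(此处仅作演示)"""
    return name.upper()

expensive_query("pCube1")   # 第一次:真正执行
expensive_query("pCube1")   # 第二次:直接命中缓存
print(f"set 相对 list 的加速比: {t_list / t_set:.1f}x")
print(f"缓存命中次数: {expensive_query.cache_info().hits}")
```

注意 `lru_cache` 只适合缓存在脚本运行期间不会失效的查询;像 `objExists` 这类结果会随场景变化的查询,缓存后需要在场景变更时手动 `cache_clear()`。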
19.2 代码审查清单
# ========== 性能代码审查模板 ==========
"""
函数名: process_large_dataset()
预期用途: 处理 10,000+ 顶点的 mesh
性能审查:
1. API 选择:✓ 使用了 maya.api.OpenMaya
2. 批处理:✓ 使用了上下文管理器
3. 循环优化:✓ 避免了嵌套循环
4. 缓存:✓ 缓存了查询结果
5. 内存:✓ 及时释放了大型数组
测试结果:
- 10,000 顶点: 0.5 秒
- 100,000 顶点: 3.2 秒
- 内存占用: < 100MB
批准人: [名字]
日期: 2024-01-01
"""20. 性能基准测试
20.1 创建性能测试套件
import maya.cmds as cmds
import maya.api.OpenMaya as om
import time
import json
class PerformanceBenchmark:
"""性能基准测试类"""
def __init__(self, name="性能测试"):
self.name = name
self.results = {}
def test(self, test_name, func, *args, iterations=1, **kwargs):
"""执行单个测试"""
print(f"\n运行测试: {test_name}")
# 预热(避免首次调用的开销;iterations 是仅限关键字参数,不会传给被测函数)
func(*args, **kwargs)
# 正式测试
times = []
for i in range(iterations):
start = time.time()
result = func(*args, **kwargs)
elapsed = time.time() - start
times.append(elapsed)
# 统计
avg_time = sum(times) / len(times)
min_time = min(times)
max_time = max(times)
self.results[test_name] = {
'average': avg_time,
'min': min_time,
'max': max_time,
'iterations': iterations
}
print(f" 平均: {avg_time:.4f}秒")
print(f" 最快: {min_time:.4f}秒")
print(f" 最慢: {max_time:.4f}秒")
return result
def compare(self, baseline_name, comparison_name):
"""比较两个测试的性能"""
if baseline_name not in self.results or comparison_name not in self.results:
print("测试不存在")
return
baseline = self.results[baseline_name]['average']
comparison = self.results[comparison_name]['average']
speedup = baseline / comparison
percent_improvement = ((baseline - comparison) / baseline) * 100
print(f"\n性能对比:")
print(f" {baseline_name}: {baseline:.4f}秒")
print(f" {comparison_name}: {comparison:.4f}秒")
print(f" 加速比: {speedup:.2f}x")
print(f" 性能提升: {percent_improvement:.1f}%")
def save_results(self, filepath):
"""保存结果到 JSON 文件"""
with open(filepath, 'w') as f:
json.dump(self.results, f, indent=2)
print(f"\n结果已保存到: {filepath}")
def print_summary(self):
"""打印摘要"""
print(f"\n{'='*50}")
print(f"{self.name} - 测试摘要")
print(f"{'='*50}")
for test_name, result in self.results.items():
print(f"\n{test_name}:")
print(f" 平均时间: {result['average']:.4f}秒")
print(f" 迭代次数: {result['iterations']}")
# ========== 使用示例 ==========
def run_benchmarks():
"""运行完整的性能测试套件"""
# 创建测试场景
cmds.file(new=True, force=True)
mesh = cmds.polySphere(subdivisionsX=100, subdivisionsY=100, radius=5)[0]
benchmark = PerformanceBenchmark("Maya Python 性能测试")
# 测试 1: cmds vs API
def test_cmds():
positions = []
vert_count = cmds.polyEvaluate(mesh, vertex=True)
for i in range(vert_count):
pos = cmds.xform(f"{mesh}.vtx[{i}]", q=True, ws=True, t=True)
positions.append(pos)
return positions
def test_api():
sel = om.MSelectionList()
sel.add(mesh)
dag_path = sel.getDagPath(0)
mesh_fn = om.MFnMesh(dag_path)
return mesh_fn.getPoints(om.MSpace.kWorld)
benchmark.test("cmds_逐顶点查询", test_cmds, iterations=3)
benchmark.test("API_批量查询", test_api, iterations=3)
benchmark.compare("cmds_逐顶点查询", "API_批量查询")
# 打印摘要
benchmark.print_summary()
# 保存结果
benchmark.save_results("/tmp/maya_performance_benchmark.json")
# 运行基准测试
# run_benchmarks()
21. 常见性能问题及解决方案
21.1 问题:脚本运行缓慢
症状: 脚本需要很长时间才能完成
诊断步骤:
import cProfile
import maya.cmds as cmds
def diagnose_slow_script():
"""诊断缓慢的脚本"""
profiler = cProfile.Profile()
profiler.enable()
# 运行你的慢脚本
your_slow_function()
profiler.disable()
profiler.print_stats(sort='cumulative')
常见原因和解决方案:
- 循环中的重复查询
# 问题
for obj in objects:
if cmds.objExists(obj): # 每次都查询
...
# 解决
objects = [obj for obj in objects if cmds.objExists(obj)] # 一次过滤
for obj in objects:
...
- 未使用批量操作
# 问题
for vtx_id in range(10000):
pos = cmds.xform(f"mesh.vtx[{vtx_id}]", q=True, t=True)
# 解决
import maya.api.OpenMaya as om
sel = om.MSelectionList()
sel.add("mesh")
mesh_fn = om.MFnMesh(sel.getDagPath(0))
positions = mesh_fn.getPoints()
- 未暂停视口刷新
# 问题
for i in range(1000):
cmds.polyCube()
# 解决
cmds.refresh(suspend=True)
try:
for i in range(1000):
cmds.polyCube()
finally:
cmds.refresh(suspend=False)
21.2 问题:Maya 界面卡顿
症状: Maya 界面响应缓慢,鼠标点击延迟
诊断步骤:
import maya.cmds as cmds
def check_script_jobs():
"""检查是否有过多的 script jobs"""
all_jobs = cmds.scriptJob(listJobs=True)
print(f"当前 Script Jobs 数量: {len(all_jobs)}")
if len(all_jobs) > 50:
print("警告: Script Jobs 过多,可能影响性能")
# 按类型统计
job_types = {}
for job in all_jobs:
job_type = job.split(':')[0].strip()
job_types[job_type] = job_types.get(job_type, 0) + 1
print("\nScript Jobs 类型统计:")
for job_type, count in sorted(job_types.items(), key=lambda x: x[1], reverse=True):
print(f" {job_type}: {count}")
def check_callback_nodes():
"""检查是否有过多的回调节点"""
script_nodes = cmds.ls(type='script')
expression_nodes = cmds.ls(type='expression')
print(f"\nScript 节点数量: {len(script_nodes)}")
print(f"Expression 节点数量: {len(expression_nodes)}")
if len(expression_nodes) > 100:
print("警告: Expression 节点过多,可能影响性能")
# 运行诊断
check_script_jobs()
check_callback_nodes()
解决方案:
- 清理不必要的 Script Jobs
import maya.cmds as cmds
def cleanup_script_jobs():
"""清理所有用户创建的 script jobs"""
all_jobs = cmds.scriptJob(listJobs=True)
for job in all_jobs:
# 提取 job ID
job_id = int(job.split(':')[0])
# 尝试删除(系统 jobs 无法删除)
try:
cmds.scriptJob(kill=job_id, force=True)
print(f"已删除 job: {job_id}")
except:
pass
# 谨慎使用!这会删除所有 script jobs
# cleanup_script_jobs()
- 优化回调函数
# 问题:回调函数执行耗时操作
def heavy_callback():
# 每次选择变化都执行,太频繁
all_meshes = cmds.ls(type='mesh')
for mesh in all_meshes:
cmds.polyEvaluate(mesh)
cmds.scriptJob(event=["SelectionChanged", heavy_callback])
# 解决:使用防抖(debounce)
class DebouncedCallback:
"""防抖回调类:短时间内的多次触发只执行最后一次"""
def __init__(self, func):
self.func = func
self._call_id = 0
def __call__(self, *args, **kwargs):
# 每次触发递增计数,记录本次的编号
self._call_id += 1
call_id = self._call_id
def delayed_execution():
# 只有当延迟期间没有新的触发时才真正执行
if call_id == self._call_id:
self.func(*args, **kwargs)
# evalDeferred 推迟到 Maya 空闲时执行,密集触发会被合并为一次
cmds.evalDeferred(delayed_execution)
# 使用防抖版本
debounced_callback = DebouncedCallback(heavy_callback)
cmds.scriptJob(event=["SelectionChanged", debounced_callback])
21.3 问题:内存占用过高
症状: Maya 内存使用持续增长,最终崩溃
诊断步骤:
import sys
import maya.cmds as cmds
def check_memory_usage():
"""检查内存使用情况"""
try:
import psutil
import os
process = psutil.Process(os.getpid())
memory_info = process.memory_info()
print(f"内存使用情况:")
print(f" 物理内存: {memory_info.rss / 1024 / 1024:.2f} MB")
print(f" 虚拟内存: {memory_info.vms / 1024 / 1024:.2f} MB")
except ImportError:
print("请安装 psutil: mayapy -m pip install psutil")
def find_large_objects():
"""找到占用内存较大的 Python 对象"""
import gc
import sys
gc.collect()
objects = gc.get_objects()
print(f"Python 对象总数: {len(objects)}")
# 找到最大的对象
large_objects = []
for obj in objects:
try:
size = sys.getsizeof(obj)
if size > 1024 * 1024: # 大于 1MB
large_objects.append((type(obj).__name__, size))
except:
pass
# 排序并打印
large_objects.sort(key=lambda x: x[1], reverse=True)
print("\n占用内存较大的对象 (>1MB):")
for obj_type, size in large_objects[:10]:
print(f" {obj_type}: {size / 1024 / 1024:.2f} MB")
check_memory_usage()
find_large_objects()
解决方案:
- 清理引用和循环引用
import maya.cmds as cmds
import weakref
class MeshDataCache:
"""使用弱引用避免内存泄漏"""
def __init__(self):
self._cache = {}
def add_mesh_data(self, mesh_name, data):
"""添加 mesh 数据"""
# 使用弱引用,当对象被删除时自动清理
self._cache[mesh_name] = weakref.ref(data)
def get_mesh_data(self, mesh_name):
"""获取 mesh 数据"""
if mesh_name in self._cache:
data = self._cache[mesh_name]()
if data is not None:
return data
else:
# 对象已被删除,清理缓存
del self._cache[mesh_name]
return None
def clear(self):
"""清理缓存"""
self._cache.clear()
- 定期执行垃圾回收
import gc
import maya.cmds as cmds
def perform_deep_cleanup():
"""执行深度清理"""
print("执行内存清理...")
# 强制垃圾回收
collected = gc.collect()
print(f"回收了 {collected} 个对象")
# 清理 Maya 的撤销队列
cmds.flushUndo()
print("已清理撤销队列")
# 刷新视口
cmds.refresh()
check_memory_usage()
# 在长时间运行的脚本中定期调用
def long_running_task():
for i in range(10000):
# ... 执行任务
# 每1000次迭代清理一次
if i % 1000 == 0:
gc.collect()
print(f"进度: {i}/10000, 执行垃圾回收")
- 使用上下文管理器管理资源
from contextlib import contextmanager
import maya.api.OpenMaya as om
@contextmanager
def temp_mesh_data(mesh_name):
"""临时 mesh 数据的上下文管理器"""
# 获取数据
sel = om.MSelectionList()
sel.add(mesh_name)
dag_path = sel.getDagPath(0)
mesh_fn = om.MFnMesh(dag_path)
positions = mesh_fn.getPoints(om.MSpace.kWorld)
normals = mesh_fn.getVertexNormals(False, om.MSpace.kWorld)
try:
yield positions, normals
finally:
# 显式删除大型对象
del positions
del normals
import gc
gc.collect()
# 使用方式
with temp_mesh_data("pSphere1") as (positions, normals):
# 使用数据
print(f"顶点数: {len(positions)}")
# 退出上下文后自动清理
21.4 问题:脚本在特定场景下很慢
症状: 脚本在某些场景下运行正常,在其他场景下非常慢
诊断:
import maya.cmds as cmds
import time
def analyze_scene_complexity():
"""分析场景复杂度"""
print("=" * 50)
print("场景复杂度分析")
print("=" * 50)
# 节点统计
all_nodes = cmds.ls()
print(f"\n节点总数: {len(all_nodes)}")
# 按类型统计
node_types = {}
for node in all_nodes:
node_type = cmds.nodeType(node)
node_types[node_type] = node_types.get(node_type, 0) + 1
print("\n主要节点类型:")
for node_type, count in sorted(node_types.items(), key=lambda x: x[1], reverse=True)[:10]:
print(f" {node_type}: {count}")
# Mesh 复杂度
meshes = cmds.ls(type='mesh')
total_verts = 0
total_faces = 0
print(f"\nMesh 数量: {len(meshes)}")
if meshes:
large_meshes = []
for mesh in meshes:
try:
vert_count = cmds.polyEvaluate(mesh, vertex=True)
face_count = cmds.polyEvaluate(mesh, face=True)
total_verts += vert_count
total_faces += face_count
if vert_count > 10000:
parent = cmds.listRelatives(mesh, parent=True, fullPath=True)
large_meshes.append((parent[0] if parent else mesh, vert_count))
except:
pass
print(f"总顶点数: {total_verts:,}")
print(f"总面数: {total_faces:,}")
if large_meshes:
print("\n大型 Mesh (>10,000 顶点):")
for mesh, vert_count in sorted(large_meshes, key=lambda x: x[1], reverse=True)[:5]:
print(f" {mesh}: {vert_count:,} 顶点")
# 变形器统计
deformers = cmds.ls(type='geometryFilter')
print(f"\n变形器数量: {len(deformers)}")
if len(deformers) > 100:
print("警告: 变形器数量过多,可能影响性能")
# 贴图统计
file_nodes = cmds.ls(type='file')
print(f"贴图节点数量: {len(file_nodes)}")
# 引用统计
references = cmds.ls(references=True)
print(f"引用数量: {len(references)}")
print("\n" + "=" * 50)
# 运行分析
analyze_scene_complexity()
解决方案:
根据场景复杂度调整策略:
import maya.cmds as cmds
class AdaptiveProcessor:
"""自适应处理器 - 根据场景复杂度调整策略"""
def __init__(self):
self.scene_complexity = self._analyze_complexity()
def _analyze_complexity(self):
"""分析场景复杂度"""
meshes = cmds.ls(type='mesh')
total_verts = 0
for mesh in meshes:
try:
total_verts += cmds.polyEvaluate(mesh, vertex=True)
except:
pass
if total_verts < 10000:
return "low"
elif total_verts < 100000:
return "medium"
else:
return "high"
def process_meshes(self, meshes):
"""根据复杂度选择处理策略"""
print(f"场景复杂度: {self.scene_complexity}")
if self.scene_complexity == "low":
# 简单场景,可以使用 cmds
return self._process_with_cmds(meshes)
else:
# 复杂场景,必须使用 API
return self._process_with_api(meshes)
def _process_with_cmds(self, meshes):
"""使用 cmds 处理(简单场景)"""
print("使用 cmds 处理...")
results = []
for mesh in meshes:
vert_count = cmds.polyEvaluate(mesh, vertex=True)
results.append(vert_count)
return results
def _process_with_api(self, meshes):
"""使用 API 处理(复杂场景)"""
print("使用 API 处理...")
import maya.api.OpenMaya as om
results = []
for mesh in meshes:
sel = om.MSelectionList()
sel.add(mesh)
dag_path = sel.getDagPath(0)
mesh_fn = om.MFnMesh(dag_path)
results.append(mesh_fn.numVertices)
return results
# 使用示例
processor = AdaptiveProcessor()
meshes = cmds.ls(type='mesh')
results = processor.process_meshes(meshes)
22. 高级优化技术
22.1 使用 Maya 的批量命令
某些 Maya 命令支持一次处理多个对象:
import maya.cmds as cmds
# ========== 不推荐:逐个删除 ==========
def delete_objects_slow(objects):
for obj in objects:
cmds.delete(obj)
# ========== 推荐:批量删除 ==========
def delete_objects_fast(objects):
if objects:
cmds.delete(objects)
# ========== 不推荐:逐个选择 ==========
def select_objects_slow(objects):
cmds.select(clear=True)
for obj in objects:
cmds.select(obj, add=True)
# ========== 推荐:批量选择 ==========
def select_objects_fast(objects):
cmds.select(objects, replace=True)
# ========== 批量属性设置 ==========
def set_visibility_slow(objects, value):
for obj in objects:
cmds.setAttr(f"{obj}.visibility", value)
def set_visibility_fast(objects, value):
# 使用批量命令一次处理所有对象,而不是逐个 setAttr
if value:
cmds.showHidden(objects)
else:
cmds.hide(objects)
22.2 使用 MPlug 进行高效属性操作
import maya.api.OpenMaya as om
import maya.cmds as cmds
# ========== 使用 cmds(较慢)==========
def get_attributes_cmds(obj, attrs):
"""使用 cmds 获取多个属性"""
results = {}
for attr in attrs:
results[attr] = cmds.getAttr(f"{obj}.{attr}")
return results
# ========== 使用 MPlug(更快)==========
def get_attributes_plug(obj, attrs):
"""使用 MPlug 获取多个属性"""
sel = om.MSelectionList()
sel.add(obj)
node = sel.getDependNode(0)
dep_fn = om.MFnDependencyNode(node)
results = {}
for attr in attrs:
try:
plug = dep_fn.findPlug(attr, False)
# 根据属性的 API 类型选择读取方式(MPlug 没有 isNumeric 属性)
api_type = plug.attribute().apiType()
if api_type in (om.MFn.kNumericAttribute,
om.MFn.kDoubleLinearAttribute,
om.MFn.kDoubleAngularAttribute):
results[attr] = plug.asDouble()
else:
results[attr] = plug.asString()
except RuntimeError:
results[attr] = None
return results
# 性能测试
import time
obj = "pCube1"
attrs = ["translateX", "translateY", "translateZ", "rotateX", "rotateY", "rotateZ"]
# 测试 cmds
start = time.time()
for _ in range(1000):
get_attributes_cmds(obj, attrs)
cmds_time = time.time() - start
# 测试 MPlug
start = time.time()
for _ in range(1000):
get_attributes_plug(obj, attrs)
plug_time = time.time() - start
print(f"cmds: {cmds_time:.4f}秒")
print(f"MPlug: {plug_time:.4f}秒")
print(f"加速比: {cmds_time/plug_time:.2f}x")
22.3 使用 MDGModifier 进行批量 DG 操作
import maya.api.OpenMaya as om
import maya.cmds as cmds
# ========== 使用 cmds(较慢)==========
def create_nodes_cmds(count):
"""使用 cmds 创建节点"""
nodes = []
for i in range(count):
node = cmds.createNode('multiplyDivide', name=f'mult_{i}')
cmds.setAttr(f'{node}.input1X', i)
nodes.append(node)
return nodes
# ========== 使用 MDGModifier(更快)==========
def create_nodes_modifier(count):
"""使用 MDGModifier 批量创建节点"""
modifier = om.MDGModifier()
nodes = []
# 批量创建节点
for i in range(count):
node = modifier.createNode('multiplyDivide')
nodes.append(node)
# 一次性执行所有操作
modifier.doIt()
# 设置属性
for i, node in enumerate(nodes):
dep_fn = om.MFnDependencyNode(node)
plug = dep_fn.findPlug('input1X', False)
modifier.newPlugValueFloat(plug, float(i))
# 再次执行
modifier.doIt()
# 获取节点名称
node_names = []
for node in nodes:
dep_fn = om.MFnDependencyNode(node)
node_names.append(dep_fn.name())
return node_names
# 性能测试
import time
count = 1000
# 测试 cmds
start = time.time()
create_nodes_cmds(count)
cmds_time = time.time() - start
cmds.file(new=True, force=True)
# 测试 MDGModifier
start = time.time()
create_nodes_modifier(count)
modifier_time = time.time() - start
print(f"cmds: {cmds_time:.4f}秒")
print(f"MDGModifier: {modifier_time:.4f}秒")
print(f"加速比: {cmds_time/modifier_time:.2f}x")
22.4 使用 MFnMesh 的批量编辑方法
import maya.api.OpenMaya as om
import maya.cmds as cmds
import random
def create_test_mesh():
"""创建测试 mesh"""
mesh = cmds.polyPlane(sx=100, sy=100, w=10, h=10)[0]
return mesh
# ========== 逐顶点修改(慢)==========
def modify_mesh_slow(mesh_name):
"""逐顶点修改位置"""
import maya.api.OpenMaya as om
sel = om.MSelectionList()
sel.add(mesh_name)
dag_path = sel.getDagPath(0)
mesh_fn = om.MFnMesh(dag_path)
positions = mesh_fn.getPoints(om.MSpace.kWorld)
# 逐个修改并逐个写回(非常慢)
for i in range(len(positions)):
pos = positions[i]
new_pos = om.MPoint(pos.x, pos.y + random.uniform(-0.5, 0.5), pos.z)
# 每个顶点单独调用一次 setPoint,函数调用开销随顶点数线性累积
mesh_fn.setPoint(i, new_pos, om.MSpace.kWorld)
# ========== 批量修改(快)==========
def modify_mesh_fast(mesh_name):
"""批量修改位置"""
import maya.api.OpenMaya as om
import random
sel = om.MSelectionList()
sel.add(mesh_name)
dag_path = sel.getDagPath(0)
mesh_fn = om.MFnMesh(dag_path)
# 批量获取
positions = mesh_fn.getPoints(om.MSpace.kWorld)
# 批量修改
new_positions = om.MPointArray()
for pos in positions:
new_pos = om.MPoint(pos.x, pos.y + random.uniform(-0.5, 0.5), pos.z)
new_positions.append(new_pos)
# 批量设置
mesh_fn.setPoints(new_positions, om.MSpace.kWorld)
# ========== 使用 NumPy(最快)==========
def modify_mesh_numpy(mesh_name):
"""使用 NumPy 批量修改"""
import maya.api.OpenMaya as om
import numpy as np
sel = om.MSelectionList()
sel.add(mesh_name)
dag_path = sel.getDagPath(0)
mesh_fn = om.MFnMesh(dag_path)
# 获取位置
positions = mesh_fn.getPoints(om.MSpace.kWorld)
# 转换为 NumPy 数组
pos_array = np.array([[p.x, p.y, p.z] for p in positions])
# 向量化操作
random_offsets = np.random.uniform(-0.5, 0.5, size=len(pos_array))
pos_array[:, 1] += random_offsets
# 转换回 MPointArray
new_positions = om.MPointArray([om.MPoint(p[0], p[1], p[2]) for p in pos_array])
# 批量设置
mesh_fn.setPoints(new_positions, om.MSpace.kWorld)
# 性能对比
import time
mesh = create_test_mesh()
# 测试批量修改
start = time.time()
modify_mesh_fast(mesh)
fast_time = time.time() - start
# 注意:API 2.0 的 setPoints 不进入撤销队列,cmds.undo() 无法恢复,这里直接继续测试
# 测试 NumPy
start = time.time()
modify_mesh_numpy(mesh)
numpy_time = time.time() - start
print(f"批量修改: {fast_time:.4f}秒")
print(f"NumPy: {numpy_time:.4f}秒")
print(f"NumPy 加速比: {fast_time/numpy_time:.2f}x")
23. 实用工具函数库
将常用的优化技术封装成可重用的工具库:
"""
maya_performance_utils.py
Maya 性能优化工具库
"""
import maya.cmds as cmds
import maya.api.OpenMaya as om
import maya.api.OpenMayaAnim as oma
import time
import functools
from contextlib import contextmanager
# ========== 上下文管理器 ==========
@contextmanager
def performance_context(suspend_refresh=True, suspend_undo=True, restore_selection=True):
"""
性能优化上下文管理器
Usage:
with performance_context():
# 批量操作
for i in range(1000):
cmds.polyCube()
"""
prev_selection = cmds.ls(selection=True) if restore_selection else []
if suspend_refresh:
cmds.refresh(suspend=True)
if suspend_undo:
cmds.undoInfo(stateWithoutFlush=False)
try:
yield
finally:
if suspend_undo:
cmds.undoInfo(stateWithoutFlush=True)
if suspend_refresh:
cmds.refresh(suspend=False)
cmds.refresh()
if restore_selection and prev_selection:
cmds.select(prev_selection, replace=True)
@contextmanager
def timer_context(name="Operation"):
"""
计时上下文管理器
Usage:
with timer_context("创建立方体"):
cmds.polyCube()
"""
start = time.time()
yield
elapsed = time.time() - start
print(f"{name} 耗时: {elapsed:.4f} 秒")
# ========== 装饰器 ==========
def performance_timer(func):
"""
性能计时装饰器
Usage:
@performance_timer
def my_function():
# ...
"""
@functools.wraps(func)
def wrapper(*args, **kwargs):
start = time.time()
result = func(*args, **kwargs)
elapsed = time.time() - start
print(f"{func.__name__} 执行时间: {elapsed:.4f} 秒")
return result
return wrapper
def with_performance_context(func):
"""
自动应用性能上下文的装饰器
Usage:
@with_performance_context
def create_many_objects():
for i in range(1000):
cmds.polyCube()
"""
@functools.wraps(func)
def wrapper(*args, **kwargs):
with performance_context():
return func(*args, **kwargs)
return wrapper
# ========== Mesh 操作工具 ==========
class FastMeshOps:
"""快速 Mesh 操作类"""
@staticmethod
def get_vertex_positions(mesh_name, world_space=True):
"""批量获取顶点位置"""
sel = om.MSelectionList()
sel.add(mesh_name)
dag_path = sel.getDagPath(0)
mesh_fn = om.MFnMesh(dag_path)
space = om.MSpace.kWorld if world_space else om.MSpace.kObject
return mesh_fn.getPoints(space)
@staticmethod
def set_vertex_positions(mesh_name, positions, world_space=True):
"""批量设置顶点位置"""
sel = om.MSelectionList()
sel.add(mesh_name)
dag_path = sel.getDagPath(0)
mesh_fn = om.MFnMesh(dag_path)
space = om.MSpace.kWorld if world_space else om.MSpace.kObject
mesh_fn.setPoints(positions, space)
@staticmethod
def get_vertex_normals(mesh_name, world_space=True):
"""批量获取顶点法线"""
sel = om.MSelectionList()
sel.add(mesh_name)
dag_path = sel.getDagPath(0)
mesh_fn = om.MFnMesh(dag_path)
space = om.MSpace.kWorld if world_space else om.MSpace.kObject
return mesh_fn.getVertexNormals(False, space)
@staticmethod
def get_mesh_info(mesh_name):
"""获取 mesh 基本信息"""
sel = om.MSelectionList()
sel.add(mesh_name)
dag_path = sel.getDagPath(0)
mesh_fn = om.MFnMesh(dag_path)
return {
'vertices': mesh_fn.numVertices,
'faces': mesh_fn.numPolygons,
'edges': mesh_fn.numEdges,
'uvs': mesh_fn.numUVs
}
# ========== 蒙皮权重工具 ==========
class FastSkinOps:
"""快速蒙皮权重操作类"""
@staticmethod
def get_skin_weights(mesh_name, skin_cluster):
"""批量获取蒙皮权重"""
sel = om.MSelectionList()
sel.add(skin_cluster)
sel.add(mesh_name)
skin_obj = sel.getDependNode(0)
mesh_dag = sel.getDagPath(1)
skin_fn = oma.MFnSkinCluster(skin_obj)
mesh_fn = om.MFnMesh(mesh_dag)
# 创建组件
comp_fn = om.MFnSingleIndexedComponent()
single_comp = comp_fn.create(om.MFn.kMeshVertComponent)
comp_fn.addElements(list(range(mesh_fn.numVertices)))
# 获取权重
weights, num_influences = skin_fn.getWeights(mesh_dag, single_comp)
return weights, num_influences
@staticmethod
def set_skin_weights(mesh_name, skin_cluster, weights):
"""批量设置蒙皮权重"""
sel = om.MSelectionList()
sel.add(skin_cluster)
sel.add(mesh_name)
skin_obj = sel.getDependNode(0)
mesh_dag = sel.getDagPath(1)
skin_fn = oma.MFnSkinCluster(skin_obj)
mesh_fn = om.MFnMesh(mesh_dag)
# 创建组件
comp_fn = om.MFnSingleIndexedComponent()
single_comp = comp_fn.create(om.MFn.kMeshVertComponent)
comp_fn.addElements(range(mesh_fn.numVertices))
# 设置权重
influence_indices = om.MIntArray(range(len(skin_fn.influenceObjects())))
skin_fn.setWeights(mesh_dag, single_comp, influence_indices, weights)
# ========== 批量查询工具 ==========
class BatchQuery:
"""批量查询工具"""
@staticmethod
@functools.lru_cache(maxsize=256)
def node_exists(node_name):
"""缓存的节点存在检查"""
return cmds.objExists(node_name)
@staticmethod
def filter_existing_nodes(nodes):
"""过滤存在的节点"""
return [n for n in nodes if cmds.objExists(n)]
@staticmethod
def get_node_types(nodes):
"""批量获取节点类型"""
return {node: cmds.nodeType(node) for node in nodes if cmds.objExists(node)}
# Usage examples
if __name__ == "__main__":
    # Example 1: use the performance context
    with performance_context():
        for i in range(100):
            cmds.polyCube()

    # Example 2: use the timing decorator
    @performance_timer
    def create_spheres(count):
        for i in range(count):
            cmds.polySphere()

    create_spheres(50)

    # Example 3: fast mesh operations
    mesh = cmds.polySphere()[0]
    positions = FastMeshOps.get_vertex_positions(mesh)
    info = FastMeshOps.get_mesh_info(mesh)
    print(f"Mesh info: {info}")
24. Performance Optimization Summary
Core principles
- Measure first, optimize second - use a profiler to find the real bottlenecks
- Pick the right tool - API 2.0 > cmds > PyMEL for performance-critical work
- Batch operations - process many objects per call to reduce function-call overhead
- Cache results - avoid recomputing the same data
- Use context managers - suspend viewport refresh and the undo queue
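The "measure first" principle works the same outside Maya. As a minimal, Maya-independent sketch, a small timing context manager (the name `timed` is illustrative, not part of any Maya API) makes it easy to bracket any suspect block:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label="block"):
    """Print how long the wrapped block took."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        print(f"{label}: {elapsed:.4f}s")

# Time a candidate hotspot before deciding whether it needs optimizing
with timed("sum"):
    total = sum(range(1_000_000))
```

The same pattern is what a Maya performance context builds on, with refresh/undo suspension added inside the `try`/`finally`.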
Quick checklist
Must do:
- ✅ Use maya.api.OpenMaya for heavy geometry data
- ✅ Batch attribute gets/sets
- ✅ Use performance contexts (suspend refresh/undo)
- ✅ Cache repeated queries
- ✅ Use appropriate data structures (set, dict)
Avoid:
- ❌ Querying attributes one by one inside loops
- ❌ Unnecessary string operations
- ❌ Overusing PyMEL
- ❌ Forgetting to clean up Script Jobs
- ❌ Deeply nested loops
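The data-structure item is easy to verify with a quick micro-benchmark that runs anywhere: membership tests are O(1) on a set but O(n) on a list, which matters when filtering thousands of node names:

```python
import timeit

# Simulated node names, as a list and as a set
names = [f"node_{i}" for i in range(10_000)]
name_set = set(names)

# Worst case for the list: the target sits at the end
t_list = timeit.timeit(lambda: "node_9999" in names, number=2_000)
t_set = timeit.timeit(lambda: "node_9999" in name_set, number=2_000)

print(f"list: {t_list:.4f}s  set: {t_set:.4f}s")
```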
Performance gain reference
| Optimization | Expected speedup |
|---|---|
| cmds → API 2.0 | 10-50x |
| Per-object ops → batch ops | 5-20x |
| Adding caching | 2-10x |
| Suspending refresh | 2-5x |
| Using NumPy | 3-15x |
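The caching row can be demonstrated outside Maya with functools.lru_cache, the same decorator BatchQuery.node_exists uses above; `expensive_query` here is a hypothetical stand-in for a slow scene query:

```python
import functools
import time

call_count = 0

@functools.lru_cache(maxsize=None)
def expensive_query(name):
    """Stand-in for a slow per-node scene query."""
    global call_count
    call_count += 1
    time.sleep(0.001)  # simulate the round-trip cost of a real query
    return len(name)

# 1000 lookups over only 10 distinct keys -> only 10 real calls
for i in range(1000):
    expensive_query(f"node_{i % 10}")

print(f"real calls: {call_count}")          # 10
print(expensive_query.cache_info())          # hits=990, misses=10
```

Remember to call `expensive_query.cache_clear()` when the scene changes, or stale results will be returned.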
25. Recommended Resources
Official documentation:
- Maya Python API 2.0 Reference
- Maya Commands Reference
Learning resources:
- CGCircuit - Maya Python courses
- Autodesk Knowledge Network
- Python Performance Tips (python.org)
Tools:
- cProfile - Python profiler
- Maya Profiler - built-in profiler
- memory_profiler - memory profiling
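As a minimal cProfile example (pure Python, and it runs the same way inside Maya's Script Editor), profile a function and print the most expensive calls:

```python
import cProfile
import io
import pstats

def hotspot(n):
    """A deliberately expensive function to profile."""
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
hotspot(200_000)
profiler.disable()

# Print the five most expensive entries by cumulative time
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

In a real tool you would wrap the suspect entry point instead of `hotspot` and look for the rows with the highest cumulative time.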
Closing Thoughts
Maya Python performance optimization is an ongoing process. The keys are:
- Know your tools - understand when to use cmds, the API, or something else
- Measure performance - let data, not guesswork, drive your decisions
- Optimize incrementally - tackle the biggest bottleneck first
- Keep code readable - don't over-optimize
I hope this guide helps you write faster, more efficient Maya Python scripts!
Remember: premature optimization is the root of all evil, but knowing the techniques is essential. 🚀