Take different paths depending on the situation (branching), and repeat something until a condition is met (looping).

The key to a loop is knowing its starting point, its end point, and its step size.

1.  for loop: use it when you know how many times to iterate, e.g. when traversing an array.

for (let i = 0; i < 5; i++) {
    console.log(`This is iteration ${i}`); // note: ${} interpolation requires backticks, not quotes
}
// Structure: initialization; condition; increment

2.  while loop: use it when you don't know the count in advance, but you know the loop should continue while some condition holds.

let stack = [1, 2, 3];
while (stack.length > 0) { // keep going as long as the stack is not empty
    console.log(stack.pop());
}

3.  for...of (for arrays and other iterables) and  for...in (for object properties): they free you from index bookkeeping and hand you the values directly.

// for...of iterates over an array's values
let colors = ['red', 'green', 'blue'];
for (let color of colors) {
    console.log(color); // prints 'red', 'green', 'blue' in turn
}

// for...in iterates over an object's keys
let obj = {a: 1, b: 2};
for (let key in obj) {
    console.log(key, obj[key]); // prints 'a' 1, then 'b' 2
}

// 1. Get the element collection (an HTMLCollection, which is array-like)
const oldParagraphs = document.getElementsByClassName('old-style');

// 2. Iterate and modify (classic for loop)
for (let i = 0; i < oldParagraphs.length; i++) {
    oldParagraphs[i].style.color = '#999';
    oldParagraphs[i].style.textDecoration = 'line-through';
}

// 3. A more modern approach (convert the collection to a real array, then forEach)
// Array.from(oldParagraphs).forEach(p => {
//   p.style.color = '#999';
//   p.style.textDecoration = 'line-through';
// });

// 4. Or use for...of directly (recommended!)
// for (let p of oldParagraphs) {
//   p.style.color = '#999';
//   p.style.textDecoration = 'line-through';
// }


[root@node0 /]# ll
total 16
lrwxrwxrwx  1 root root    7 Apr  2  2021 bin -> usr/bin
dr-xr-xr-x  7 root root 4096 Jan  3 08:58 boot
drwxr-xr-x  2 root root   42 Jan  3 08:57 dev
drwxr-xr-x 88 root root 8192 Jan  3 08:58 etc
drwxr-xr-x  2 root root    6 Apr  2  2021 home
lrwxrwxrwx  1 root root    7 Apr  2  2021 lib -> usr/lib
lrwxrwxrwx  1 root root    9 Apr  2  2021 lib64 -> usr/lib64
drwxr-xr-x  2 root root    6 Apr  2  2021 media
drwxr-xr-x  2 root root    6 Apr  2  2021 mnt
drwxr-xr-x  2 root root    6 Apr  2  2021 opt
dr-xr-xr-x  2 root root    6 Apr  2  2021 proc
dr-xr-x---  2 root root  140 Jan  3 08:59 root
drwxr-xr-x 16 root root  281 Jan  3 08:56 run
lrwxrwxrwx  1 root root    8 Apr  2  2021 sbin -> usr/sbin
drwxr-xr-x  2 root root    6 Apr  2  2021 srv
dr-xr-xr-x  2 root root    6 Apr  2  2021 sys
drwxrwxrwt  2 root root    6 Jan  3 08:58 tmp
drwxr-xr-x 12 root root  192 Jan  3 08:55 usr


Example: lookups over millions of records

// Lookup with a List (O(n))
var list = new List<Customer>(GetCustomers());
var target = list.FirstOrDefault(c => c.Id == targetId);

// Lookup with a Dictionary (O(1))
var dict = GetCustomers().ToDictionary(c => c.Id);
var target = dict.TryGetValue(targetId, out var result) ? result : null;

6. LINQ Performance Optimization

LINQ offers elegant query syntax, but on performance-critical paths it can become a bottleneck.

Optimization strategies:

  • Hot paths: replace LINQ with conventional loops
  • When LINQ is necessary: add  AsParallel() for parallel processing (CPU-bound operations only)
  • Precompiled queries: for EF Core, use compiled queries ( EF.CompileQuery), as sketched below
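A minimal sketch of a precompiled query, assuming a hypothetical AppDbContext with a Users DbSet (EF.CompileQuery is the EF Core API for this):

// The LINQ expression is translated once, when the delegate is created,
// instead of on every execution.
private static readonly Func<AppDbContext, int, User> GetUserById =
    EF.CompileQuery((AppDbContext ctx, int id) =>
        ctx.Users.FirstOrDefault(u => u.Id == id));

// Usage: var user = GetUserById(context, 42);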

Performance comparison example:

// LINQ approach
var activeUsers = users.Where(u => u.IsActive)
                       .Select(u => u.Name)
                       .ToList();

// Optimized loop approach
var activeUsers = new List<string>(users.Count);
foreach (var user in users)
{
    if (user.IsActive)
        activeUsers.Add(user.Name);
}

7. Database Access Optimization

Database interaction is often an application's main performance bottleneck, and the optimization potential there is correspondingly large.

Key optimization areas:

  1. Query optimization:

    • Select only the fields you need (avoid  SELECT *)
    • Use appropriate indexes
    • Replace per-row loops with batch operations
  2. Connection management:

    • Use connection pooling
    • Set sensible connection timeouts
    • Release connections promptly
  3. Caching strategy (see the sketch after this list):

    • Cache stable data
    • Consider multi-level caching (in-memory cache + distributed cache)
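A minimal sketch of the in-memory layer, using Microsoft.Extensions.Caching.Memory; the cache key and the LoadCountriesAsync() loader are illustrative stand-ins for your own stable data:

using Microsoft.Extensions.Caching.Memory;

// One shared cache instance for stable reference data.
var cache = new MemoryCache(new MemoryCacheOptions());

// GetOrCreateAsync runs the loader only on a cache miss.
var countries = await cache.GetOrCreateAsync("countries", entry =>
{
    entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(30); // illustrative TTL
    return LoadCountriesAsync(); // hypothetical database call
});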

EF Core optimization example:

// Inefficient: one round trip per id
foreach (var id in ids)
{
    var product = await context.Products.FindAsync(id);
    // process a single product
}

// Efficient: load the whole batch in one query
var products = await context.Products
    .Where(p => ids.Contains(p.Id))
    .ToListAsync();
// process the batch

8. Use Parallel Processing with Care

Parallelization can speed up CPU-bound tasks, but overusing it leads to thread contention and extra overhead.

When it applies:

  • Good fit: independent, compute-heavy tasks (e.g. image processing, complex calculations)
  • Avoid: I/O-bound work and scenarios with frequent access to shared resources

Correct usage example:

Parallel.For(0, 100, i =>
{
    Compute(i); // CPU-bound work with no shared state
});

Caveats:

  • Cap the degree of parallelism ( ParallelOptions.MaxDegreeOfParallelism)
  • Avoid blocking operations inside parallel loops
  • Use thread-safe collections ( ConcurrentBag<T>,  ConcurrentQueue<T>) to collect results, as in the sketch below
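A minimal sketch combining these caveats; Compute is the same placeholder as above, here assumed to return a value:

using System.Collections.Concurrent;

var results = new ConcurrentBag<int>();
var options = new ParallelOptions
{
    // Leave headroom for the rest of the process instead of saturating every core.
    MaxDegreeOfParallelism = Math.Max(1, Environment.ProcessorCount / 2)
};

Parallel.For(0, 100, options, i =>
{
    results.Add(Compute(i)); // thread-safe; no manual locking needed
});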

9. Startup Time Optimization

Slow startup leaves users with a poor first impression, especially in client applications.

Optimization strategies:

  • Lazy loading: defer initialization of non-critical components until they are used
  • Asynchronous initialization: initialize heavyweight components on a background thread (see the sketch after the lazy-loading example)
  • AOT compilation: reduce JIT overhead for .NET Native applications
  • Modular design: load assemblies on demand

Implementation example:

// Lazy-loading example
private Lazy<HeavyService> _service = new Lazy<HeavyService>(() => new HeavyService());

public void ProcessRequest()
{
    _service.Value.HandleRequest(); // initialized on first access
}
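The asynchronous-initialization strategy could be sketched like this, reusing HeavyService as the stand-in heavyweight component:

// Start building the component on the thread pool as early as possible...
private readonly Task<HeavyService> _serviceTask = Task.Run(() => new HeavyService());

// ...and await it only where it is first needed, so startup is not blocked.
public async Task ProcessRequestAsync()
{
    var service = await _serviceTask;
    service.HandleRequest();
}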

10. Runtime and Dependency Updates

Keeping the .NET runtime and your libraries up to date gets you performance improvements for free.

Benefits of updating:

  • New runtime versions usually ship GC and JIT improvements
  • Framework libraries keep getting faster (e.g.  System.Text.Json replacing  Newtonsoft.Json)
  • Security patches and bug fixes

Update strategy:

  • Periodically evaluate upgrading to the latest LTS release
  • Use compatibility packages such as  Microsoft.Bcl.AsyncInterfaces to smooth the transition
  • Test the new version's GC modes (e.g. server GC vs. workstation GC), as in the project-file sketch below
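Switching GC modes is a project-file setting. A sketch of the relevant MSBuild properties (which mode wins depends on your workload, so measure both):

<PropertyGroup>
  <!-- Server GC: throughput-oriented, one heap per logical core -->
  <ServerGarbageCollection>true</ServerGarbageCollection>
  <!-- Background (concurrent) collection reduces pause times -->
  <ConcurrentGarbageCollection>true</ConcurrentGarbageCollection>
</PropertyGroup>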

11. Performance Monitoring in Production

Performance under real load can differ drastically from the development environment, so continuous monitoring is essential.

What to monitor:

  • Key metrics: response time, error rate, throughput
  • System resources: CPU, memory, disk I/O, network
  • Application-specific: cache hit rate, queue length, database query time

Recommended tools:

  • Application Insights
  • Prometheus + Grafana
  • Custom performance counters (see the sketch below)
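One way to implement custom counters is System.Diagnostics.Metrics, whose instruments can be consumed by Application Insights as well as by OpenTelemetry/Prometheus exporters; the meter and instrument names here are illustrative:

using System.Diagnostics.Metrics;

// A process-wide meter plus two instruments for the request path.
private static readonly Meter AppMeter = new("MyApp.Performance");
private static readonly Counter<long> RequestCount =
    AppMeter.CreateCounter<long>("requests_total");
private static readonly Histogram<double> RequestDurationMs =
    AppMeter.CreateHistogram<double>("request_duration_ms");

// In the request handler:
// RequestCount.Add(1);
// RequestDurationMs.Record(stopwatch.Elapsed.TotalMilliseconds);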


final Optional<DecodingFormat<DeserializationSchema<RowData>>> keyDecodingFormat =
        getKeyDecodingFormat(helper);
final DecodingFormat<DeserializationSchema<RowData>> valueDecodingFormat =
        getValueDecodingFormat(helper);

helper.validateExcept(PROPERTIES_PREFIX);

final ReadableConfig tableOptions = helper.getOptions();
validateTableSourceOptions(tableOptions);
validatePKConstraints(
        context.getObjectIdentifier(),
        context.getPrimaryKeyIndexes(),
        context.getCatalogTable().getOptions(),
        valueDecodingFormat);

final StartupOptions startupOptions = getStartupOptions(tableOptions);
final BoundedOptions boundedOptions = getBoundedOptions(tableOptions);
final Properties properties = getKafkaProperties(context.getCatalogTable().getOptions());

// add topic-partition discovery
final Duration partitionDiscoveryInterval =
        tableOptions.get(SCAN_TOPIC_PARTITION_DISCOVERY);
properties.setProperty(
        KafkaSourceOptions.PARTITION_DISCOVERY_INTERVAL_MS.key(),
        Long.toString(partitionDiscoveryInterval.toMillis()));

final DataType physicalDataType = context.getPhysicalRowDataType();
final int[] keyProjection = createKeyFormatProjection(tableOptions, physicalDataType);
final int[] valueProjection = createValueFormatProjection(tableOptions, physicalDataType);
final String keyPrefix = tableOptions.getOptional(KEY_FIELDS_PREFIX).orElse(null);
final Integer parallelism = tableOptions.getOptional(SCAN_PARALLELISM).orElse(null);

return createKafkaTableSource(
        physicalDataType,
        keyDecodingFormat.orElse(null),
        valueDecodingFormat,
        keyProjection,
        valueProjection,
        keyPrefix,
        getTopics(tableOptions),
        getTopicPattern(tableOptions),
        properties,
        startupOptions.startupMode,
        startupOptions.specificOffsets,
        startupOptions.startupTimestampMillis,
        boundedOptions.boundedMode,
        boundedOptions.specificOffsets,
        boundedOptions.boundedTimestampMillis,
        context.getObjectIdentifier().asSummaryString(),


FactoryUtil.validateFactoryOptions(this, formatOptions);
JsonFormatOptionsUtil.validateDecodingFormatOptions(formatOptions);

final boolean failOnMissingField = formatOptions.get(FAIL_ON_MISSING_FIELD);
final boolean ignoreParseErrors = formatOptions.get(IGNORE_PARSE_ERRORS);
final boolean jsonParserEnabled = formatOptions.get(DECODE_JSON_PARSER_ENABLED);
TimestampFormat timestampOption = JsonFormatOptionsUtil.getTimestampFormat(formatOptions);

return new ProjectableDecodingFormat<DeserializationSchema<RowData>>() {
    @Override
    public DeserializationSchema<RowData> createRuntimeDecoder(
            DynamicTableSource.Context context,
            DataType physicalDataType,
            int[][] projections) {
        final DataType producedDataType =
                Projection.of(projections).project(physicalDataType);
        final RowType rowType = (RowType) producedDataType.getLogicalType();
        final TypeInformation<RowData> rowDataTypeInfo =
                context.createTypeInformation(producedDataType);
        if (jsonParserEnabled) {
            return new JsonParserRowDataDeserializationSchema(
                    rowType,
                    rowDataTypeInfo,
                    failOnMissingField,
                    ignoreParseErrors,
                    timestampOption,
                    toProjectedNames(
                            (RowType) physicalDataType.getLogicalType(), projections));
        } else {
            return new JsonRowDataDeserializationSchema(
                    rowType,
                    rowDataTypeInfo,
                    failOnMissingField,
                    ignoreParseErrors,
                    timestampOption);
        }
    }

    @Override
    public ChangelogMode getChangelogMode() {
        return ChangelogMode.insertOnly();
    }

final DeserializationSchema<RowData> keyDeserialization =
        createDeserialization(context, keyDecodingFormat, keyProjection, keyPrefix);
final DeserializationSchema<RowData> valueDeserialization =
        createDeserialization(context, valueDecodingFormat, valueProjection, null);
final TypeInformation<RowData> producedTypeInfo =
        context.createTypeInformation(producedDataType);
final KafkaSource<RowData> kafkaSource =
        createKafkaSource(keyDeserialization, valueDeserialization, producedTypeInfo);

return new DataStreamScanProvider() {
    @Override
    public DataStream<RowData> produceDataStream(
            ProviderContext providerContext, StreamExecutionEnvironment execEnv) {
        if (watermarkStrategy == null) {
            watermarkStrategy = WatermarkStrategy.noWatermarks();
        }
        DataStreamSource<RowData> sourceStream =
                execEnv.fromSource(
                        kafkaSource, watermarkStrategy, "KafkaSource-" + tableIdentifier);
        providerContext.generateUid(KAFKA_TRANSFORMATION).ifPresent(sourceStream::uid);
        return sourceStream;
    }

    @Override
    public boolean isBounded() {
        return kafkaSource.getBoundedness() == Boundedness.BOUNDED;
    }

    @Override
    public Optional<Integer> getParallelism() {
        return Optional.ofNullable(parallelism);


if (!commitOffsetsOnCheckpoint) {
    return splits;
}

if (splits.isEmpty() && offsetsOfFinishedSplits.isEmpty()) {
    offsetsToCommit.put(checkpointId, Collections.emptyMap());
} else {
    Map<TopicPartition, OffsetAndMetadata> offsetsMap =
            offsetsToCommit.computeIfAbsent(checkpointId, id -> new HashMap<>());
    // Put the offsets of the active splits.
    for (KafkaPartitionSplit split : splits) {
        // If the checkpoint is triggered before the partition starting offsets
        // is retrieved, do not commit the offsets for those partitions.
        if (split.getStartingOffset() >= 0) {
            offsetsMap.put(
                    split.getTopicPartition(),
                    new OffsetAndMetadata(split.getStartingOffset()));
        }
    }
    // Put offsets of all the finished splits.
    offsetsMap.putAll(offsetsOfFinishedSplits);
}
return splits;
}

public void notifyCheckpointComplete(long checkpointId) throws Exception {
    LOG.debug("Committing offsets for checkpoint {}", checkpointId);
    ...
    ((KafkaSourceFetcherManager) splitFetcherManager)
            .commitOffsets(
                    committedPartitions,


try {
    consumerRecords = consumer.poll(Duration.ofMillis(POLL_TIMEOUT));
} catch (WakeupException | IllegalStateException e) {
    // IllegalStateException will be thrown if the consumer is not assigned any partitions.
    // This happens if all assigned partitions are invalid or empty (starting offset >=
    // stopping offset). We just mark empty partitions as finished and return an empty
    // record container, and this consumer will be closed by SplitFetcherManager.
    KafkaPartitionSplitRecords recordsBySplits =
            new KafkaPartitionSplitRecords(
                    ConsumerRecords.empty(), kafkaSourceReaderMetrics);
    markEmptySplitsAsFinished(recordsBySplits);
    return recordsBySplits;
}

KafkaPartitionSplitRecords recordsBySplits =
        new KafkaPartitionSplitRecords(consumerRecords, kafkaSourceReaderMetrics);
List<TopicPartition> finishedPartitions = new ArrayList<>();
for (TopicPartition tp : consumer.assignment()) {
    long stoppingOffset = getStoppingOffset(tp);
    long consumerPosition = getConsumerPosition(tp, "retrieving consumer position");
    // Stop fetching when the consumer's position reaches the stoppingOffset.
    // Control messages may follow the last record; therefore, using the last record's
    // offset as a stopping condition could result in indefinite blocking.
    if (consumerPosition >= stoppingOffset) {
        LOG.debug(
                "Position of {}: {}, has reached stopping offset: {}",
                tp,
                consumerPosition,
                stoppingOffset);
        recordsBySplits.setPartitionStoppingOffset(tp, stoppingOffset);
        finishSplitAtRecord(
                tp, stoppingOffset, consumerPosition, finishedPartitions, recordsBySplits);
    }
}

// Only track a non-empty partition's record lag if it never appeared before
consumerRecords
        .partitions()
        .forEach(
                trackTp -> {
                    kafkaSourceReaderMetrics.maybeAddRecordsLagMetric(consumer, trackTp);
                });
markEmptySplitsAsFinished(recordsBySplits);

// Unassign the partitions that have finished.
if (!finishedPartitions.isEmpty()) {
    finishedPartitions.forEach(kafkaSourceReaderMetrics::removeRecordsLagMetric);
    unassignPartitions(finishedPartitions);


        this, autoCompleteSchemaRegistrySubject(context));

final Optional<EncodingFormat<SerializationSchema<RowData>>> keyEncodingFormat =
        getKeyEncodingFormat(helper);
final EncodingFormat<SerializationSchema<RowData>> valueEncodingFormat =
        getValueEncodingFormat(helper);

helper.validateExcept(PROPERTIES_PREFIX);

final ReadableConfig tableOptions = helper.getOptions();
final DeliveryGuarantee deliveryGuarantee = validateDeprecatedSemantic(tableOptions);
validateTableSinkOptions(tableOptions);
KafkaConnectorOptionsUtil.validateDeliveryGuarantee(tableOptions);
validatePKConstraints(
        context.getObjectIdentifier(),
        context.getPrimaryKeyIndexes(),
        context.getCatalogTable().getOptions(),
        valueEncodingFormat);

final DataType physicalDataType = context.getPhysicalRowDataType();
final int[] keyProjection = createKeyFormatProjection(tableOptions, physicalDataType);
final int[] valueProjection = createValueFormatProjection(tableOptions, physicalDataType);
final String keyPrefix = tableOptions.getOptional(KEY_FIELDS_PREFIX).orElse(null);
final Integer parallelism = tableOptions.getOptional(SINK_PARALLELISM).orElse(null);

return createKafkaTableSink(
        physicalDataType,
        keyEncodingFormat.orElse(null),
        valueEncodingFormat,
        keyProjection,
        valueProjection,
        keyPrefix,
        getTopics(tableOptions),
        getTopicPattern(tableOptions),
        getKafkaProperties(context.getCatalogTable().getOptions()),
        getFlinkKafkaPartitioner(tableOptions, context.getClassLoader()).orElse(null),
        deliveryGuarantee,
        parallelism,


public List<KafkaWriterState> snapshotState(long checkpointId) throws IOException {
    // recycle committed producers
    TransactionFinished finishedTransaction;
    while ((finishedTransaction = backchannel.poll()) != null) {
        producerPool.recycleByTransactionId(
                finishedTransaction.getTransactionId(), finishedTransaction.isSuccess());
    }
    // persist the ongoing transactions into the state; these will not be aborted on restart
    Collection<CheckpointTransaction> ongoingTransactions =
            producerPool.getOngoingTransactions();
    currentProducer = startTransaction(checkpointId + 1);
    return createSnapshots(ongoingTransactions);
}

private List<KafkaWriterState> createSnapshots(
        Collection<CheckpointTransaction> ongoingTransactions) {
    List<KafkaWriterState> states = new ArrayList<>();
    int[] subtaskIds = this.ownedSubtaskIds;
    for (int index = 0; index < subtaskIds.length; index++) {
        int ownedSubtask = subtaskIds[index];
        states.add(
                new KafkaWriterState(
                        transactionalIdPrefix,
                        ownedSubtask,
                        totalNumberOfOwnedSubtasks,
                        transactionNamingStrategy.getOwnership(),
                        // new transactions are only created with the first owned subtask id
                        index == 0 ? ongoingTransactions : List.of()));

