A C# Bedtime Story

Once upon a time, in a strange land to the south, there was a worker named Peter. He was very diligent and always obedient to his boss. But his boss was a mean, distrustful man who insisted on knowing Peter's work progress at all times, lest Peter slack off. Peter, for his part, didn't want his boss standing in his office staring over his shoulder, so he made his boss a promise: whenever my work makes any progress, I will let you know right away. Peter implemented his promise by periodically "calling back" his boss through a typed reference, like this:
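(The story's code listings are missing from this copy. The C# sketches in this section reconstruct what each step plausibly looked like; the names Worker, Boss, Advise, and the grading scheme are assumptions, not the lost originals.)

using System;

class Boss
{
    public void WorkStarted() { /* the boss doesn't actually care about this one */ }
    public void WorkProgressing() { /* ...or about this one */ }
    public int WorkCompleted()
    {
        Console.WriteLine("Boss: it's about time!");
        return 4; // the boss's grade for Peter's work
    }
}

class Worker
{
    private Boss _boss;

    // Peter's promise: hand me a typed reference and I'll call you back.
    public void Advise(Boss boss) { _boss = boss; }

    public void DoWork()
    {
        Console.WriteLine("Worker: work started");
        if (_boss != null) _boss.WorkStarted();

        Console.WriteLine("Worker: work progressing");
        if (_boss != null) _boss.WorkProgressing();

        Console.WriteLine("Worker: work completed");
        if (_boss != null)
            Console.WriteLine("Worker: my grade = " + _boss.WorkCompleted());
    }
}

class Program
{
    static void Main()
    {
        Worker peter = new Worker();
        peter.Advise(new Boss());
        peter.DoWork();
    }
}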

Interfaces

Now, Peter was a special person: not only could he tolerate his stingy boss, he was also closely attuned to the universe around him, so much so that he believed the universe was interested in his work progress too. Unfortunately, that meant he would have had to add a special Advise callback for the universe as well in order to report his progress to his boss and the universe at the same time. Peter wanted to separate the list of potential notifications from the implementations of those notifications, so he decided to split the methods out into an interface:
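(A sketch of the interface step; IWorkerEvents matches the name used later in the story, the rest is assumed.)

using System;

interface IWorkerEvents
{
    void WorkStarted();
    void WorkProgressing();
    int WorkCompleted();
}

class Worker
{
    private IWorkerEvents _events;

    public void Advise(IWorkerEvents events) { _events = events; }

    public void DoWork()
    {
        Console.WriteLine("Worker: work started");
        if (_events != null) _events.WorkStarted();

        Console.WriteLine("Worker: work progressing");
        if (_events != null) _events.WorkProgressing();

        Console.WriteLine("Worker: work completed");
        if (_events != null)
            Console.WriteLine("Worker: my grade = " + _events.WorkCompleted());
    }
}

// Anyone who implements the interface can now listen, boss and universe alike.
class Boss : IWorkerEvents
{
    public void WorkStarted() { /* still doesn't care */ }
    public void WorkProgressing() { /* still doesn't care */ }
    public int WorkCompleted() { return 3; }
}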

Delegates

Unfortunately, whenever Peter was busy communicating with his boss through the interface implementation, he had no chance to notify the universe. At least he had abstracted the reference to his boss far away from himself, so that anyone else who implemented the IWorkerEvents interface could be notified of his work progress.

Still, his boss complained bitterly. "Peter!" he bellowed. "Why do you bother me when you start your work and while your work is progressing?! I don't care about those events. Not only do you force me to implement those methods, you're wasting my precious work time handling your events, especially when I'm traveling! Can't you stop pestering me?"

So Peter realized that while interfaces are useful in many situations, their granularity is not fine enough when it comes to events. He wanted to notify his listeners only of the events they actually cared about, so he decided to break the interface methods into separate delegates, each of which works like a little interface method:
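(A sketch of the delegate step, keeping the assumed names from the sketches above.)

using System;

delegate void WorkStarted();
delegate void WorkProgressing();
delegate int WorkCompleted();

class Worker
{
    // Listeners hook up only the notifications they care about.
    public WorkStarted Started;
    public WorkProgressing Progressing;
    public WorkCompleted Completed;

    public void DoWork()
    {
        if (Started != null) Started();
        if (Progressing != null) Progressing();
        if (Completed != null)
            Console.WriteLine("Worker: my grade = " + Completed());
    }
}

class Boss
{
    public int WorkCompleted() { return 4; }
}

class Program
{
    static void Main()
    {
        Worker peter = new Worker();
        Boss boss = new Boss();
        // The boss hooks only the one event he cares about.
        peter.Completed = new WorkCompleted(boss.WorkCompleted);
        peter.DoWork();
    }
}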

Static Listeners

Now Peter no longer pestered his boss with events the boss didn't want, but he still hadn't put the universe on his list of listeners. Since the universe is an all-encompassing entity, delegates to instance methods hardly seemed appropriate (imagine the resources it would take to instantiate a Universe...), so Peter needed to be able to hook delegates to static methods instead, which delegates support very well:
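(A sketch of static listeners, assuming the Worker class and delegate types from the previous sketch.)

using System;

class Universe
{
    // No Universe instance required: delegates bind to static
    // methods just as easily as to instance methods.
    public static void WorkerStartedWork()
    {
        Console.WriteLine("Universe notices the worker starting work");
    }

    public static int WorkerCompletedWork()
    {
        Console.WriteLine("Universe is pleased with the worker's work");
        return 7;
    }

    static void Main()
    {
        Worker peter = new Worker();
        peter.Started = new WorkStarted(Universe.WorkerStartedWork);
        peter.Completed = new WorkCompleted(Universe.WorkerCompletedWork);
        peter.DoWork();
    }
}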

Events

Unfortunately, the universe was very busy and not used to paying close attention to the individuals inside it, so it could simply replace the boss's delegate with its own. That was an unintended side effect of making the delegate fields of Peter's Worker class public. Likewise, if Peter's boss got impatient, he could decide to fire Peter's delegates himself (what a rude boss):

Peter wanted neither of those things to happen. He realized he needed to provide "register" and "unregister" functionality for each delegate, so that listeners could add and remove their own delegates but could neither clear the whole list nor fire Peter's events at will. Rather than implementing all of this himself, Peter used the event keyword and let the C# compiler build those methods for him:
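(A sketch of the event step, assuming the delegate types declared earlier; the trailing comments show what the compiler now allows and forbids.)

using System;

class Worker
{
    // The event keyword wraps the delegate so outsiders can only
    // subscribe and unsubscribe; they cannot assign or invoke it.
    public event WorkStarted Started;
    public event WorkProgressing Progressing;
    public event WorkCompleted Completed;

    public void DoWork()
    {
        if (Started != null) Started();
        if (Progressing != null) Progressing();
        if (Completed != null)
            Console.WriteLine("Worker: my grade = " + Completed());
    }
}

// From outside the Worker class:
//   worker.Completed += new WorkCompleted(boss.WorkCompleted);  // OK
//   worker.Completed -= new WorkCompleted(boss.WorkCompleted);  // OK
//   worker.Completed = null;  // compile-time error: cannot replace the list
//   worker.Completed();       // compile-time error: cannot fire the event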

Peter knew that the event keyword wraps a property-like accessor around the delegate, allowing C# clients to add and remove handlers only through the += and -= operators, which forces his boss and the universe to use his events correctly.

Harvesting All Results

At this point Peter could finally breathe a sigh of relief: he had satisfied the needs of all his listeners while avoiding tight coupling to any particular implementation. But he noticed that although both his boss and the universe graded his work, he was only receiving a single grade. With multiple listeners, he wanted to "harvest" all of the results, so he reached inside the delegate, walked the list of listeners, and called each one by hand:
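(A self-contained sketch of harvesting every listener's result.)

using System;

delegate int WorkCompleted();

class Worker
{
    public event WorkCompleted Completed;

    public void FireWorkCompleted()
    {
        if (Completed != null)
        {
            // Walk the invocation list and call each listener by hand,
            // harvesting every grade instead of only the last one.
            foreach (WorkCompleted wc in Completed.GetInvocationList())
            {
                int grade = wc();
                Console.WriteLine("Worker: grade = " + grade);
            }
        }
    }
}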

Asynchronous Notification: Fire & Forget

Meanwhile, his boss and the universe were busy with other things, which meant the time they took to grade Peter's work grew very long:

Unfortunately, Peter had to wait for each listener to grade him before he could notify the next one, and these notifications were now eating up too much of his work time. So he decided to forget about the grades and simply fire the events asynchronously:
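(A sketch of fire-and-forget notification. Delegate BeginInvoke is the classic .NET Framework pattern; note that it is not supported on .NET Core.)

delegate int WorkCompleted();

class Worker
{
    public event WorkCompleted Completed;

    public void FireWorkCompleted()
    {
        if (Completed != null)
        {
            foreach (WorkCompleted wc in Completed.GetInvocationList())
            {
                // Queue the call to a thread-pool thread and return to
                // work immediately, ignoring the grade entirely.
                wc.BeginInvoke(null, null);
            }
        }
    }
}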

Asynchronous Notification: Polling

This let Peter notify his listeners and return to work immediately, leaving it to the process's thread pool to call the delegates. As time went by, though, Peter found he was losing the feedback on his work. He knew that hearing others' praise was just as important as working hard, so he kept firing the events asynchronously but polled periodically, picking up whatever grades had become available:
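(A sketch of firing asynchronously and polling for the grade.)

using System;

delegate int WorkCompleted();

class Worker
{
    public event WorkCompleted Completed;

    public void FireWorkCompleted()
    {
        if (Completed == null) return;
        foreach (WorkCompleted wc in Completed.GetInvocationList())
        {
            IAsyncResult res = wc.BeginInvoke(null, null);

            // Keep working, peeking now and then to see
            // whether the grade has arrived.
            while (!res.IsCompleted)
            {
                /* do some more work... */
            }
            int grade = wc.EndInvoke(res);
            Console.WriteLine("Worker: grade = " + grade);
        }
    }
}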

Asynchronous Notification: Delegates

Unfortunately, Peter was now back in exactly the situation he had tried to avoid in the first place, like a boss standing behind him watching him work. So he decided to supply a delegate of his own as the completion notification for the asynchronous delegates he invoked, letting him get right back to work while still being notified as soon as someone graded him:
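(A sketch of using a completion callback instead of polling.)

using System;

delegate int WorkCompleted();

class Worker
{
    public event WorkCompleted Completed;

    public void FireWorkCompleted()
    {
        if (Completed == null) return;
        foreach (WorkCompleted wc in Completed.GetInvocationList())
        {
            // Hand BeginInvoke a delegate of Peter's own; the thread
            // pool calls it back once the listener has finished grading.
            wc.BeginInvoke(new AsyncCallback(WorkGraded), wc);
        }
    }

    private void WorkGraded(IAsyncResult res)
    {
        // Runs on a thread-pool thread when a grade is ready.
        WorkCompleted wc = (WorkCompleted)res.AsyncState;
        int grade = wc.EndInvoke(res);
        Console.WriteLine("Worker: grade = " + grade);
    }
}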

Happiness in the Universe

Peter, his boss, and the universe were satisfied at last. His boss and the universe received notification of exactly the events they were interested in, with less implementation burden and no unnecessary round trips. Peter could notify each of them without caring how long their target methods took to return, yet still collect his results asynchronously. Peter knew it wasn't *quite* that simple, because when he fired an event asynchronously, the target method ran on another thread, and the same was true of the completion notification for his target methods. But Mike and Peter were good friends, and Mike knew all about threading and could offer guidance in that area.

And they all lived happily ever after... <The End>

A Deep Dive into SQL Server Transactions

I. Overview
II. Adverse effects of concurrent access
1. Dirty reads
2. Nonrepeatable reads
3. Phantom reads
III. Concurrency control mechanisms
1. Locks
2. Row versioning
IV. Isolation levels
V. Transactions
1. Transaction modes
1.1. Explicit transactions
1.2. Autocommit transactions
1.3. Implicit transactions
2. Programming transactions
2.1. Transact-SQL scripts
2.2. The ADO.NET API

I. Overview

When multiple users access the same database resource at the same time, that is concurrent access. If some of the concurrent users are modifying data, they may well have adverse effects on other users accessing the same resource. The possible adverse effects fall into these categories: dirty reads, nonrepeatable reads, and phantom reads.
To avoid the adverse effects of concurrent access, SQL Server provides two concurrency control mechanisms: locking and row versioning.

II. Adverse effects of concurrent access

Without a concurrency control mechanism, concurrent access can produce the following adverse effects.

1. Dirty reads

Suppose one user is updating a record, and a second user reads the updated record; but the first user then has second thoughts, abandons the modification, and rolls back the update. The second user has then actually read a modified record that never really existed. If the first user locks the record while modifying it, so that other users cannot read it before the modification completes, this situation is avoided.

2. Nonrepeatable reads

The first user reads the same record twice within one transaction. After the first read, a second user accesses the record and modifies it, so when the first user reads the record the second time, the data differs from the first read. If the first user locks the record between the two reads, so that other users cannot modify it, this situation is avoided.

3. Phantom reads

The first user reads, twice within one transaction, the batch of records satisfying the same condition. After the first read, a second user accesses the table and inserts or deletes some records, so when the first user reads the batch again with the same condition, some records present in the first result may be missing from the second, or records absent from the first result may appear in the second. If the first user locks the range being read between the two reads, so that other users can neither modify the matching records nor insert or delete records, this situation is avoided.

III. Concurrency control mechanisms

SQL Server provides two concurrency control mechanisms to avoid the adverse effects that concurrent access can produce. They are:

1. Locks

Each transaction requests locks of different types on the resources it depends on (such as rows, pages, or tables). Locks block other transactions from modifying a resource in a way that would invalidate the lock requested by the transaction. A transaction releases its locks when it no longer depends on the locked resources.
Locks come in many types, according to the granularity and level of the resource being locked. The main ones are:
• Table: locks an entire table
• Row: locks a single row
• File: locks a database file
• Database: locks the entire database
• Page: locks an 8 KB database page
The finer the lock granularity, the smaller the locked range and the less other access is blocked; but more locks may be needed, so the locking overhead is larger. The coarser the granularity, the more likely other access is blocked; but fewer locks are needed, so the overhead is smaller.
Programmers do not set or control locks by hand: SQL Server manages and controls the locks automatically according to the transaction isolation level that has been set.
The component dedicated to managing locks is the lock manager. Working from the query analyzer's analysis of the SQL statements about to execute, the lock manager determines which resources each statement will access and what operations it will perform; combined with the configured isolation level, it then automatically allocates and manages the locks that are needed.

2. Row versioning

When an isolation level based on row versioning is enabled, the database engine maintains a version of every row that is modified. An application can specify that a transaction use row versions to see the data as it existed at the start of the transaction or query, instead of protecting every read with a lock. With row versioning, the chance that a read operation blocks other transactions is greatly reduced.

IV. Isolation levels

As mentioned above, SQL Server controls the use of locks by setting the isolation level, and thereby implements concurrency control.
The Microsoft SQL Server database engine supports all of these isolation levels:
• Read uncommitted (the lowest level of transaction isolation; it only guarantees that physically corrupt data is not read)
• Read committed (the database engine's default level)
• Repeatable read
• Serializable (the highest level of transaction isolation; transactions are completely isolated from one another)
Against the three adverse effects of concurrent access described above, these isolation levels each behave differently, as the following table shows:

Isolation level       Dirty read   Nonrepeatable read   Phantom read
Read uncommitted      Possible     Possible             Possible
Read committed        No           Possible             Possible
Repeatable read       No           No                   Possible
Snapshot              No           No                   No
Serializable          No           No                   No

V. Transactions

A transaction is a single logical unit of work. It may contain many operations, but logically they form one whole: either all of them complete, or all of them fail, as if nothing had ever happened.
Transactions are a very reliable and robust mechanism, guaranteeing that a transaction either completes entirely or rolls back entirely. This reliability rests on:
• Locks: the locking mechanism preserves the isolation of concurrent transactions as far as possible and wards off the adverse effects of concurrency.
• The transaction log: the transaction log records every operation of the whole transaction; when necessary, the log is used to restart the transaction or to roll it back. Whatever happens, even a network outage, a power failure, or a failure of the database engine itself, the transaction log guarantees the transaction's integrity.
• Transaction management: guarantees a transaction's atomicity and the consistency of the data. Once a transaction starts, it either completes successfully or fails and rolls back to the state before the transaction began, undoing every modification the transaction had made.

1. Transaction modes

Several transaction modes control when a transaction starts and ends and what its scope is:

1.1. Explicit transactions

An explicit transaction is started by the BEGIN TRANSACTION statement of a SQL script or by the begin-transaction call of a programming interface (API), and it is ended by the COMMIT or ROLLBACK statement of a SQL script, or by the API's commit-transaction or rollback-transaction call. Both the start and the end of the transaction are controlled by explicit commands.
From the start of a transaction to its commit or rollback is one complete transaction cycle: once a transaction has started, it ends either in a commit or in a rollback.
If an error occurs within the scope of the transaction, the behavior depends on the kind of error:
• Severe errors
For example, the network connection between client and server breaks, or the client machine is shut down. The database engine is notified that the connection has dropped, and on such severe errors it automatically rolls back the whole transaction on the server side.
• Run-time errors
The regions delimited by GO commands between statements form command batches. The database engine compiles and executes statements batch by batch: it compiles one batch of commands and, once compilation finishes, executes that batch. A stored procedure is compiled in one piece, so there are no batches inside a stored procedure; the whole procedure is a single batch.
In most cases, when a statement in a batch hits a run-time error, that statement is aborted and none of the subsequent statements in the same batch is executed, while the statements already executed earlier in the batch remain in effect. You can, however, use TRY...CATCH to capture the error and handle it appropriately, for example by executing a rollback command.
Some run-time errors, such as inserting a record with a duplicate primary key, abort only the statement that failed; the subsequent statements continue to execute. These errors can also be caught with TRY...CATCH.
The simplest way to guarantee that an error in any statement rolls back the whole transaction is to execute SET XACT_ABORT ON before the transaction starts. This setting tells the database engine that once an error occurs inside a transaction, it should not execute the rest of the transaction and should roll the whole transaction back.
• Compile errors
When a compile error occurs, the entire batch containing the offending statement is not executed, and this is not affected by the SET XACT_ABORT setting.

1.2. Autocommit transactions

This mode is the database engine's default, and the default transaction mode of every programming interface. Each individual statement is committed when it completes and rolled back if it fails; the programmer does not need to issue any transaction commands.
Each individual statement is one transactional unit: on success it commits, and on an execution error that statement alone is rolled back, without affecting the execution of other statements. Note that "execution error" here means a run-time error. If a statement contains a compile error, such as a misspelled SQL keyword, then none of the statements in the batch containing the error is executed. For example:

USE AdventureWorks;
GO
CREATE TABLE TestBatch (Cola INT PRIMARY KEY, Colb CHAR(3));
GO
INSERT INTO TestBatch VALUES (1, 'aaa');
INSERT INTO TestBatch VALUES (2, 'bbb');
INSERT INTO TestBatch VALUSE (3, 'ccc'); -- Syntax error.
GO
SELECT * FROM TestBatch; -- Returns no rows.
GO

In the SQL above, the VALUES keyword of the third INSERT statement is misspelled, causing a compile error; as a result, none of the three INSERT statements in that batch is executed.
If the third INSERT statement were instead:
INSERT INTO TestBatch VALUES (1, 'ccc'); -- Duplicate key error.
it would raise a run-time "duplicate primary key" error; that statement alone would be rolled back, without affecting the two preceding INSERTs. This shows that in autocommit mode each individual statement either completes or rolls back on its own, without affecting the execution of the other statements.

1.3. Implicit transactions

After a SET IMPLICIT_TRANSACTIONS ON command, the first statement that follows starts a new transaction, which lasts until a COMMIT or ROLLBACK statement ends it; the next statement then starts another new transaction, again lasting until a COMMIT or ROLLBACK ends it. This forms a chain of transactions, until SET IMPLICIT_TRANSACTIONS OFF turns implicit-transaction mode off and returns to the default autocommit mode.
Behavior inside each transaction is the same as in explicit-transaction mode.
Transactions operate at the level of the connection: every connection has a transaction mode, and autocommit is the default. When a BEGIN TRANSACTION statement starts an explicit transaction, or implicit transactions are enabled by SET IMPLICIT_TRANSACTIONS ON, the connection's transaction mode is set to explicit or implicit; once the explicit transaction is committed or rolled back, or implicit mode is turned off, the connection's transaction mode reverts to autocommit.

2. Programming transactions

There are two ways to program the database: through an application programming interface (API), such as ODBC, ADO, or ADO.NET, or through Transact-SQL scripts, typically stored procedures.

2.1. Transact-SQL scripts

BEGIN TRANSACTION
Marks the starting point of an explicit connection-level transaction.
COMMIT TRANSACTION or COMMIT WORK
Ends the transaction successfully when no errors have been encountered. All data modifications made in the transaction become a permanent part of the database, and the resources held by the transaction are released.
ROLLBACK TRANSACTION or ROLLBACK WORK
Rolls back a transaction in which an error has been encountered. All data modified by the transaction is returned to its state at the start of the transaction, and the resources held by the transaction are released.

2.2. The ADO.NET API

Call the BeginTransaction method on a SqlConnection object to start an explicit transaction. To end the transaction, call the Commit() or Rollback() method on the SqlTransaction object.
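For illustration, here is a minimal ADO.NET sketch of that pattern (the connection string and the target table are placeholders to adapt to your environment):

using System;
using System.Data.SqlClient;

class AdoTransactionSketch
{
    static void Main()
    {
        using (SqlConnection conn =
            new SqlConnection("Server=.;Database=AdventureWorks;Integrated Security=true"))
        {
            conn.Open();
            SqlTransaction tx = conn.BeginTransaction();
            try
            {
                SqlCommand cmd = conn.CreateCommand();
                cmd.Transaction = tx; // every command must be enlisted in the transaction

                cmd.CommandText = "INSERT INTO TestBatch VALUES (1, 'aaa')";
                cmd.ExecuteNonQuery();
                cmd.CommandText = "INSERT INTO TestBatch VALUES (2, 'bbb')";
                cmd.ExecuteNonQuery();

                tx.Commit();   // both inserts become permanent together
            }
            catch (Exception)
            {
                tx.Rollback(); // any failure undoes everything done so far
                throw;
            }
        }
    }
}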
The discussion below uses transactions in stored procedures as the main example.
The purpose of a transaction is to preserve the integrity of a block of SQL statements: either every statement executes successfully, or, as soon as any one statement fails, the whole block can be rolled back to the state before the transaction began.
A transaction has a starting point: BEGIN TRANSACTION starts it, the statements of the transaction then execute, and at the end comes the decision. If every statement executed successfully, COMMIT TRANSACTION commits the transaction and makes the results of its statements permanent; if anything in the transaction went wrong, the error must be caught and ROLLBACK TRANSACTION executed to roll back the whole transaction.
Here is some sample code:

USE AdventureWorks;
BEGIN TRANSACTION;
BEGIN TRY
-- Generate a constraint violation error.
DELETE FROM Production.Product
WHERE ProductID = 980;
END TRY
BEGIN CATCH
SELECT
ERROR_NUMBER() AS ErrorNumber,
ERROR_SEVERITY() AS ErrorSeverity,
ERROR_STATE() as ErrorState,
ERROR_PROCEDURE() as ErrorProcedure,
ERROR_LINE() as ErrorLine,
ERROR_MESSAGE() as ErrorMessage;
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
END CATCH;
IF @@TRANCOUNT > 0
COMMIT TRANSACTION;

Put all the statements of the transaction inside the TRY block, so that any error raised by any statement is guaranteed to be caught. As soon as a statement in the transaction raises an error, the remaining statements in the transaction are skipped and control jumps straight into the CATCH block, where the error handling takes place.
The CATCH block's most important job is to execute the rollback statement, rolling back the whole transaction. It can also do auxiliary work, such as displaying or logging the error.
If no statement in the transaction fails and everything completes smoothly, the program skips the CATCH block and executes the final COMMIT TRANSACTION, committing the transaction.
You often see people use @@ERROR to catch errors and decide whether to roll back, with code roughly like this:

BEGIN TRANSACTION;
SELECT xxx FROM yyyy; -- SQL statements in the transaction
...
IF @@ERROR > 0
ROLLBACK TRANSACTION;
ELSE
COMMIT TRANSACTION;

Here @@ERROR is used to judge whether any statement in the transaction failed, and on that basis to decide whether to roll back or commit the transaction. In fact this is quite wrong.
First, @@ERROR reflects the result of each individual SQL statement: it holds the error state of the statement that just executed, and it is reset as soon as the next statement runs, to reflect that statement's result. So @@ERROR cannot tell you whether any statement in the whole transaction failed.
Second, run-time errors in SQL come in two kinds: one aborts the failing statement but lets the subsequent statements continue; the other aborts not only the failing statement but all subsequent statements in the command batch. When a statement in the transaction hits an error of the second kind, the final IF @@ERROR > 0 test never gets a chance to execute at all.
That can have very serious consequences: if a statement in the transaction raises a batch-aborting error, none of the following statements runs, so neither the intended ROLLBACK TRANSACTION nor the COMMIT TRANSACTION is ever executed. That means none of the resources locked by the transaction is released. If the transaction holds shared locks on some records, those records can never be modified again; worse, if it holds exclusive locks on some records, every statement that tries to read them blocks forever and cannot proceed. The application simply hangs there.
So, to catch statement errors inside a transaction, you really do need TRY...CATCH blocks.

Anyone who knows a little HTML knows that the usual way to make a link open in a new window is to set target="_blank" on the a tag. For example:

<a href="http://jennal.cn" target="_blank">Jennal.cn</a>

This is certainly convenient, but XHTML 1.0 Strict dropped the target attribute, which means we can no longer use it to control a link's behavior. Although today's popular browsers still honor target="_blank" under XHTML 1.0 Strict and even XHTML 1.1, such code is invalid all the same and not recommended.
Naturally we turn to JavaScript to solve the problem; I usually use the following approach:

<a href="http://jennal.cn" onclick="window.open(this.href);return false;">Jennal.cn</a>

This meets the requirement, but once there are many links, the code gets bloated. To simplify it, we should solve the problem with DOM events / DHTML. Today I happened to be building a page that has to be XHTML 1.0 Strict, and I was about to write such a JavaScript myself. Luckily I have the bad habit of Googling everything first, and I actually found a very polished JavaScript pop-up script: the code is not only beautifully written and highly reusable, it even handles modifier-key clicks in today's popular browsers. The original source is the post linked at the end; the author provides a source download and a demo.
The script is extremely general, and the author's own write-up gives detailed examples. The simplest usage is to add rel="external" to every link that should open a new window; of course, you can also customize it to key off a class or some other attribute.

<a href="http://jennal.cn" rel="external">Jennal.cn</a>

Of course, in web design pop-up windows should mostly be avoided, and used only where they genuinely improve the user experience. Also, since some pop-up blockers intercept windows opened from JavaScript, we could extend the script to detect whether the window was actually created and, if not, prompt the user to turn off the blocker, which should improve the experience considerably.
Source: http://blog.istef.info/2007/05/17/open-a-new-window-when-click-a-link-under-xhtml-10-strict/

A PHP file-download function

Source: http://www.ruofeel.cn/329.html

Using truss, strace, or ltrace to Diagnose Tricky Software Problems

A process that won't start, software that suddenly slows down, a program's "Segmentation Fault": these problems give every Unix user a headache. Through three real cases, this article shows how to use the common debugging tools truss, strace, and ltrace to diagnose such tricky problems quickly.
truss and strace are used to trace a process's system calls and the signals it receives, while ltrace traces a process's library-function calls. truss is an early debugger developed for System V R4, and most Unix systems, including AIX and FreeBSD, ship with it; strace was originally written for SunOS, and ltrace first appeared in GNU/Debian Linux. Both have since been ported to most Unix systems: most Linux distributions ship strace and ltrace, and on FreeBSD they can be installed from Ports.
You can not only debug a freshly started program from the command line, but also attach truss, strace, or ltrace to an existing PID to debug a program that is already running. The basic usage of the three tools is much the same; below are only the three command-line options that all three share and that are used most often:
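(The option list itself is missing from this copy. Judging from the standard usage of all three tools, the three shared options referred to are most likely the following; treat the exact spellings as an assumption to verify against your system's man pages.)

-f        also trace child processes created by fork()
-o file   write the trace output to the given file instead of standard error
-p pid    attach to, and trace, the already running process with the given PID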

With these three options you can handle most debugging tasks. Here are a few example command lines:

The three tools' output formats are also very similar; take strace as an example:


Each line is one system call: to the left of the equals sign are the system call's name and arguments, to the right is its return value. truss, strace, and ltrace work in much the same way: they all use the ptrace system call to trace and debug a running process. The details are beyond the scope of this article; the curious can consult their source code.


Here are three real cases that show how to use these debugging tools to diagnose tricky software problems:
Case 1: running clint ends in a Segment Fault
Operating system: FreeBSD-5.2.1-release
clint is a static source-code analysis tool for C++. After installing it from Ports, run it:


Running into "Segmentation Fault" on a Unix system is as annoying as the "illegal operation" dialog popping up on MS Windows. OK, let's have truss "take clint's pulse":


We traced clint's system calls with truss, wrote the results to the file clint.truss, and used tail to look at the last few lines. Note the last system call clint executed (fifth line from the bottom): stat("/root/.clint/plugins",0xbfbfe680) ERR#2 'No such file or directory'. There is the problem: clint cannot find the directory /root/.clint/plugins, and that triggers the segmentation fault. How to fix it? Simple: mkdir -p /root/.clint/plugins. Running clint again still ends in "Segmentation Fault", though. Tracing with truss once more shows that clint also needs the directory /root/.clint/plugins/python; once that directory is created too, clint finally runs normally.
Case 2: vim becomes noticeably slow to start
Operating system: FreeBSD-5.2.1-release
The vim version is 6.2.154. Starting vim from the command line took nearly half a minute before the editing screen appeared, with no error output whatsoever. I checked .vimrc and all the vim scripts carefully and found no misconfiguration, and the web turned up no solution to a similar problem. Would I have to go hacking the source code? No need: truss can locate the problem:


The -D option here prefixes each line of output with a relative timestamp, that is, the time spent executing each system call. We only need to look for the system calls that took conspicuously long. Paging through the output file vim.truss with less quickly turns up the suspicious spot:

vim tries to connect to port 6000 on the host 10.57.18.27 (the connect() on line 4); after the connection fails, it sleeps one second and retries (the nanosleep() on line 6). This fragment repeats a dozen or so times, each pass costing over a second, and that is why vim became so noticeably slow. But you are surely wondering: why would vim connect to port 6000 on another machine for no apparent reason? Good question: try to recall which service uses port 6000. Right, the X server. So vim is trying to direct its output to a remote X server, which means the shell must define a DISPLAY variable. Checking .cshrc, there it is: setenv DISPLAY ${REMOTEHOST}:0. Comment that line out, log in again, and the problem is gone.
Case 3: using the debugging tools to learn how a piece of software works
Operating system: Red Hat Linux 9.0
Tracing a program live with a debugging tool is not only an effective way to diagnose tricky problems; it can also help untangle a program's structure, that is, quickly grasp its flow and how it works, and it serves as a useful aid to studying source code. The following case shows how strace, by tracing another program, can spark the insight that solves a hard problem in one's own development.
As everyone knows, every file opened within a process has a unique file descriptor (fd) associated with it. While developing a piece of software, I hit this problem: given an fd, how do I obtain the full path of the file it corresponds to? Neither Linux, FreeBSD, nor any other Unix provides such an API. What then? Let's think from a different angle: is there any Unix software that can find out which files a process has open? With enough experience you will quickly think of lsof: with it you can learn both which files a process has opened and which process has a given file open. Good, let's use a little test program to see how lsof finds out which files a process has open.

We put testlsof in the background; its pid is 3125. The command lsof -p 3125 lists the files process 3125 has open; we trace lsof's execution with strace, saving the output in lsof.strace:

我们以"/tmp/foo"为关键字搜索输出文件lsof.strace,结果只有一条:


So lsof cleverly exploits the /proc/nnnn/fd/ directory (nnnn being the pid): the Linux kernel creates a directory named after each process's pid under /proc/ to hold information about the process, and its subdirectory fd holds the fds of all the files the process has open. We are very close to the goal now. Let's look into /proc/3125/fd/ and see:

The answer is now obvious: each fd file under /proc/nnnn/fd/ is a symbolic link, and each link points to a file opened by that process. We just need the readlink() system call to obtain the file behind a given fd; the code is as follows:

For security reasons, FreeBSD 5 and later no longer mount the proc filesystem automatically by default. So, to trace a program with truss or strace, you must mount the proc filesystem by hand: mount -t procfs proc /proc; or add a line to /etc/fstab:

How to choose and use source code analysis tools

January 20, 2009 (CSO) The high cost of finding and patching application flaws is well known. Wouldn't it be cheaper to write secure code in the first place?
One of the fastest growing areas in the software security industry is source code analysis tools, also known as static analysis tools. These tools review source code (or in Veracode Inc.'s case, binary code) line by line to detect security vulnerabilities and provide advice on how to remediate problems they find -- ideally before the code goes into production. (See "How to Evaluate and Use Web Application Scanners" for tools and services that search for vulnerabilities as an outside attacker might.)
The entire software security market was worth about $300 million in 2007, according to Gary McGraw, chief technology officer at Cigital Inc., a software security and quality consulting firm in Dulles, Va. McGraw estimates that the tools portion of that market doubled from 2006 to 2007 to about $180 million. About half of that is attributable to static analysis tools, which amounted to about $91.9 million, he says.
And no wonder; according to Gartner Inc., close to 90% of software attacks are aimed at the application layer. If security were integrated earlier in the software development life cycle, flaws would be uncovered earlier, reducing costs and increasing efficiency compared with removing defects later through patches or never finding them at all, says Diane Kelley, founder of SecurityCurve, a security consultancy in Amherst, N.H. "Although there is no replacement for security-aware design and a methodical approach to creating more-secure applications, code-scanning tools are a very useful addition to the process," she says.
Despite the high degree of awareness, many companies are behind the curve in their use of static analysis tools, Kelley says, possibly because of the big process changes that these tools entail.

Key decisions in source code analysis

1. Should you start with static tools or dynamic tools, or use both?

In addition to static analysis, which reviews code before it goes live, there are also dynamic analysis tools, which conduct automated scans of production Web applications to unearth vulnerabilities. In other words, dynamic tools test from the outside in, while static tools test from the inside out, says Neil McDonald, an analyst at Gartner.
Many organizations start with dynamic testing, just to get a quick assessment of where their applications stand, McDonald says. In some cases, the groups that start this initiative are in security or audit compliance departments and don't have access to source code. The natural second step is to follow up with static analyzers, enabling developers to fix the problems found by dynamic analysis tools. Some companies continue using both because each type yields different findings.
An important differentiator between the two types is that static analyzers provide the exact line of code causing the problem, while dynamic analyzers just identify the Web page causing the issue. That's why some vendors offer integration between the two types of tools.
Dynamic assessment tools "tend to be brute force," according to the chief scientist at a large software vendor who asked not to be identified. "You have to hit every parameter to find the vulnerabilities, whereas static tools investigate the whole landscape of the application." He recently chose a code scanner from Ounce Labs Inc. after outsourcing the work to Cigital since 2006. He became interested in application security when customers began requiring PCI DSS certification. He plans to add in dynamic testing in the future, but the static analysis tool is the cornerstone of his application security program.

2. Do you have the source code?

Most static analyzers scan source code, but what happens if you want to analyze third-party software or code written so long ago that you only have the executable? In that case, Veracode offers binary code scanning through a software-as-a-service system. "A vendor may not be willing to give you source code, but they will give you executables or binary," Kelley says.
At the Federal Aviation Administration, Michael Brown, director of the Office of Information Systems Security, says he chose to use Veracode's services this year because of the amount of vendor-written code the FAA anticipated to use as a result of its modernization of the national airspace system. Brown says he wanted to ensure that the code was not only functionally correct and of high quality but also secure. He wanted a service rather than a tool to reduce the need for training. So far, the results have been eye-opening, he says. "A lot of the code didn't really take security into account," he says. "There were cases of memory leaks, cross-site scripting and buffer overflows that could have been a cause for concern."

3. What do you currently use for software quality?

Some tool vendors, such as Coverity, Klocwork, Parasoft and Compuware, originated in the quality-testing arena and have added security capabilities, as opposed to vendors like Ounce and Fortify Software Inc., whose tools were solely designed for security. It's worthwhile to check into the quality tools you already use to see if you can leverage the existing relationship and tool familiarity. You should also consider whether it's important to your organization to have the two functions merged into one tool in the long term, McDonald says. (See "Penetration Testing: Dead in 2009?" for more on the interplay of quality assurance and vulnerability detection.)

Source code analysis tools: Evaluation criteria

  • Support for the programming languages you use. Some companies support mobile devices, while others concentrate on enterprise languages such as Java, .Net, C, C++ and even Cobol.
  • Good bug-finding performance, using a proof-of-concept assessment. Hint: Use an older build of code you had issues with and see how well the product catches bugs you had to find manually. Look for both thoroughness and accuracy. Fewer false positives means less manual work.
  • Internal knowledge bases that provide descriptions of vulnerabilities and remediation information. Test for easy access and cross-referencing to discovered findings.
  • Tight integration with your development platforms. Long-term, you'll likely want developers to incorporate security analysis into their daily routines.
  • A robust finding/suppression mechanism to prevent false positives from reoccurring once you've verified them as a non-issue.
  • The ability to easily define additional rules so the tool can enforce internal coding policies.
  • A centralized reporting component if you have a large team of developers and managers who want access to findings, trending and overview reporting.

Do's and don'ts of source code analysis

Don't underestimate the adoption time required. Most static analysis projects are initiated by security or compliance staffers, not developers, who may not immediately embrace these tools. Before developers get involved, McDonald suggests doing the legwork on new processes, planning integration with other workflows like bug-tracking systems and development environments, and tuning the tool to your unique coding needs. "Don't deploy to every developer at once," he adds. "Ideally, you'll get someone who wants to take on a competency role for security testing."
The chief scientist at the large software vendor has developed an application security-awareness program that includes training on common vulnerabilities through podcasts and videocasts. Once he builds up awareness, he plans to educate developers on secure coding standards. To complete the circle, he'll introduce Ounce's static code analysis tool to enforce the standards and catch vulnerabilities "so it's a feedback loop," he says. (See "Rob Cheyne Pushes for Developer Security Awareness" for a look at a similar agenda.)
Do consider using more than one tool. Collin Park, a senior engineer at NetApp Inc., says the company uses two code-analysis tools: Developers run Lint on their desktops, and the company uses Coverity each night to scan all completed code. "They catch different things," he explains. NetApp began using these tools when its customer base shifted to enterprise customers that had more stringent requirements. While Coverity is better at spotting vulnerabilities such as memory leaks, Lint catches careless coding errors that developers make and seems to run faster on developer desktops, Park says.
According to Kelley, organizations typically implement static analyzers at two stages of the development process: within the development environment, so developers can check their own code as they're writing, and within the code repository, so it can be analyzed at check-in time. The chief scientist uses this method. "In the first scan, if the engineer takes every finding and suppresses them, a milestone scan will catch those and generate a report," he says.
Do analyze pricing. Vendors have different pricing strategies, McDonald says. For instance, while all continuously add information to their libraries about the latest vulnerabilities, some charge extra for this, while others include it in the maintenance fee, he says. In addition, some vendors charge per seat, which can get expensive for large shops and may even seem wasteful for companies that don't intend to run the scanner every day. Other vendors charge per enterprise license. In addition, some vendors charge for additional languages, while others charge one price for any language they support, McDonald says.
Do plan to amend your processes. Tools are no replacement for strong processes that ensure application security from the beginning, starting with defining requirements, which should focus on security as much as functionality, according to Kelley. For instance, a tool won't tell you whether a piece of data should be encrypted to comply with the Payment Card Industry Data Security Standard. "If a company just goes out and buys one of these tools and continues to do everything else the same, they won't get to the next level," she says.
The chief scientist says it's also important to determine what will happen when vulnerabilities are found, especially because the tools can generate thousands of findings. "Does the workflow allow them to effectively analyze, triage, prioritize or dispose of the findings?" he notes. He is working with Ounce to integrate the system better with his current bug-tracking system, which is Quality Center. "It would be great to right-click on the finding to automatically inject it into the bug-tracking system," he says.
At NetApp, Park has reworked processes to ensure developers fix flagged vulnerabilities. As part of submitting code, developers do a test build, which must succeed, or it can't be checked in. Then, when they check in code, an automated process starts an incremental build. If that build fails, a bug report is filed, complete with the names of developers who checked in code before the last build. "Developers are trained to treat a build failure as something they have to look at now," Park says.
NetApp also created a Web-based chart that's automatically updated each night to track which managers have teams that were issued Lint or Coverity warnings and whether they were cleared.
Do retain the human element. While the tools will provide long lists of vulnerabilities, it takes a skilled professional to interpret and prioritize the results. "Companies don't have time to fix every problem, and they may not need to," Kelley says. "You have to have someone who understands what is and is not acceptable, especially in terms of a time-vs.-perfection trade-off."
The chief scientist calls this "truly an art form" that requires a competent security engineer. "When the tool gives you 10,000 findings, you don't want someone trying to fix all those," he says. "In fact, 10,000 may turn out to be just 500 or 100 vulnerabilities."
Park points out an instance when the Coverity tool once found what it called "a likely infinite loop." On first glance, the developer could see there was no loop, but after a few more minutes of review, he detected something else wrong with the code. "The fact that you get the tool to stop complaining is not an indication you've fixed anything," he says.
Don't anticipate a short scan. NetApp runs scans each night, and because it needs to cover thousands of files and millions of lines of code, it takes roughly 10 hours to complete a code review. The rule of thumb, according to Coverity, is for each hour of build time, allow for two hours for the analysis to be complete. Coverity also enables companies to do incremental runs so that users don't have to scan the entire code base but just what they've changed in the nightly build.
Do consider reporting flexibility. At the FAA, Brown gets two reports: an executive summary that provides a high-level view of vulnerabilities detected and even provides a security "score," and a more detailed report that pinpoints which lines of code look troublesome and the vulnerability that was detected. In the future, Brown would like to build into vendor contracts the requirement that they meet a certain security score for all code they develop for the FAA.
Don't forget the business case. When Brown first wanted to start reviewing code, he met with some resistance from managers who wanted a defined business need. "You've got program managers with schedules to meet, and they can view this as just another bump in the road that's going to keep them from making their milestones," he says.
Brown created the business case by looking to independent sources such as Gartner and Burton Group for facts and figures about code vulnerability, and he also ran some reports on how much time the FAA was dedicating to patch management.
The chief scientist justified the cost of the Ounce tool by taking the total cost of the product and comparing that with the effort involved in a manual review. "With millions of lines of code, imagine how many engineers it would take to do that -- and, by the way, we want to do it every week," he says. "The engineers would fall down dead of boredom."
Mary Brandel is a freelance writer based outside of Boston.

Several Ways to Refresh a Page in JavaScript

1    history.go(0)
2    location.reload()
3    location=location
4    location.assign(location)
5    document.execCommand('Refresh')
6    window.navigate(location)
7    location.replace(location)
8    document.URL=location.href

[Repost] Seven Ways to Write Beautiful Code

First, let me make clear that this article is purely about writing code that looks beautiful, not about technique, logic, and so on. Here are the seven ways:
1. End if statements as quickly as possible
For example, the following JavaScript looks frightening:

But written like this, it looks much better:

You may well dislike the second form, but it reflects the idea of returning from the if as quickly as possible, which you can also read as: avoid unnecessary else statements.
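(The JavaScript snippets above were lost from this copy; the sketch below shows the same contrast in C#, with all types and names invented for illustration.)

using System;

class User { public bool IsAdmin; }

class RoleExample
{
    // Frightening: the whole body nests inside an if/else ladder.
    static string GetUserRoleNested(User user)
    {
        string role;
        if (user != null)
        {
            if (user.IsAdmin)
                role = "admin";
            else
                role = "member";
        }
        else
        {
            role = "anonymous";
        }
        return role;
    }

    // Much better: return as early as possible; no else needed.
    static string GetUserRoleEarlyReturn(User user)
    {
        if (user == null) return "anonymous";
        if (user.IsAdmin) return "admin";
        return "member";
    }

    static void Main()
    {
        Console.WriteLine(GetUserRoleNested(null));            // anonymous
        Console.WriteLine(GetUserRoleEarlyReturn(new User())); // member
    }
}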
2. Don't use an if statement for a simple boolean (logical) expression
For example:

can be written as:
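(Both snippets are missing here; presumably they contrasted something like the following, sketched in C# with invented names.)

class VotingRules
{
    // With an unnecessary if statement:
    static bool CanVote(int age)
    {
        if (age >= 18)
            return true;
        else
            return false;
    }

    // The boolean expression is already the answer:
    static bool CanVoteDirect(int age)
    {
        return age >= 18;
    }
}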

3. Use whitespace; it's free
For example:

Many developers are reluctant to use whitespace, as if it were billed. I'm not saying you should pepper the code with blank lines and rudely interrupt its flow; as you actually write code, you will easily spot where whitespace belongs, and it makes the code not only nicer to look at but easier for the reader to understand, like this:

4. Don't write pointless comments
Pointless comments tax the reader and are downright annoying. Don't spell out the blindingly obvious. In the example below, everyone can see the code means "student id", so there is no need to say so.
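(The example itself is missing here; it was presumably a single line along these lines, reconstructed in C#.)

int sid = 5; // students id   <- the name already says so; the comment adds nothing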

5. Don't leave deleted code in the source file, even commented out
If you use version control, you can easily retrieve the previous version of the code. If someone takes great pains to read your code only to find out it's code you meant to delete, that is infuriating.

6. Don't write overly long lines
Reading a very long line is hard work, especially when what the code does is small. For example:

I'm not saying you must stick to 70 characters, but a comfortable limit is around 120 characters per line. If you publish the code on the Internet, readers will struggle with it.
7. Don't put too many lines of code in one function (or method)
An old colleague of mine used to say that Visual C++ stinks because it doesn't let you have more than 10,000 lines of code in a single function. I don't remember the exact limit and don't know whether he was right, but I strongly disagree with his view. Do you know how hard it is to read a function that runs past 50 lines, full of endless if blocks and loops, scrolling back and forth to cross-reference the code? For me, anything over 35 lines is already hard to understand. My advice: once a function passes that size, split it in two.
Source: http://news.csdn.net/n/20081217/121810.html

Finally nailed a maddening JSON problem, noting it down

Today I got killed by JSON-encoding Chinese text...
json_encode encodes every Chinese character as "\uxxxx",
but when the result was stored into the database, the "\" got swallowed, leaving just "uxxxx".
Frustrating. I assumed the problem was in json_decode and spent a whole afternoon searching the web without finding a fix.
Only when I noticed that every article said json_encode turns Chinese into "\uxxxx" did it dawn on me...
A few experiments later, I confirmed the problem was in the database-storage step...
Recording it here for posterity.
As for the fix: keep the "\", of course, by replacing "\" with "\\" before storing, and that's it.
One more thing I almost forgot: what json_decode returns is not an array but a stdClass, and there is a maddening catch: if a hash key contains an underscore, as in json['xx_yy'], something very strange happens: the page shows nothing and reports no error; it nearly drove me mad. One workaround is to foreach over the object first and copy the underscored keys into an array, or simply convert the whole thing into an array.
// Thanks to 软工 for the reply
Since this post was written two years ago and I never wrote down the actual scenario, I have forgotten under what circumstances the json_decode problem occurred.
If I remember correctly, json_decode did not yet have its second parameter at the time.

PHP get remote ip address