Wednesday, December 30, 2009

The three VMware networking modes (bridged, host-only, NAT)

VMware provides three networking modes: bridged, NAT (network address translation) and host-only. To use them sensibly in network administration and maintenance you should first understand how each of them works.

bridged mode: here the operating system inside the VMware guest behaves like an independent host on the LAN and can reach any machine on that network. In bridged mode you have to configure the guest's IP address and subnet mask by hand, and they must be on the same subnet as the host machine, or guest and host cannot talk to each other. Because the guest is an independent host on the LAN, you can also set its TCP/IP configuration manually so that it reaches the Internet through the LAN's gateway or router. A bridged guest and its host are like two PCs plugged into the same hub: to make them communicate you must give the guest an IP address and subnet mask, otherwise there is no connectivity. If you want to use VMware to set up a virtual server on the LAN that provides services to LAN users, bridged mode is the one to pick.

host-only mode: in some special debugging setups the real network must be isolated from the virtual one, and that is what host-only mode is for. In host-only mode all the virtual machines can talk to one another, but they are cut off from the real network. Tip: in host-only mode the guest and the host system can still talk to each other, as if the two machines were connected with a crossover cable. The guest's TCP/IP configuration (IP address, gateway, DNS servers and so on) is handed out by the DHCP server of the VMnet1 (host-only) virtual network. If you want a VMware guest that is isolated from the other machines on your network for special debugging work, choose host-only mode. ------------------------
My own addition:
1. When installing the operating system in the virtual machine, give the guest the IP 192.168.0.99 and DNS 192.168.0.1.
2. Change the IP of the host's VMnet1 adapter to 192.168.0.1.
3. On the host NIC that actually has Internet access, enable Internet Connection Sharing (Properties --> Advanced --> Connection Sharing) and share the connection to VMnet1.
4. ping 192.168.0.99 from the host; if it answers, the setup is correct.
5. Boot the Linux system in the virtual machine and enjoy the web (a guest-side sketch follows below). ------------------------
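A rough sketch of the matching static network setup inside the Linux guest — the interface name eth0 is an assumption, and the addresses are simply the ones used in the steps above:

# run as root inside the Linux guest; eth0 is assumed to be the host-only adapter
ifconfig eth0 192.168.0.99 netmask 255.255.255.0 up   # the static IP from step 1
route add default gw 192.168.0.1                      # the host's VMnet1 address
echo "nameserver 192.168.0.1" > /etc/resolv.conf      # DNS is handled by the sharing host
ping -c 3 192.168.0.1                                 # should answer once sharing works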

NAT (network address translation) mode: with NAT the guest uses network address translation to reach the public network through the network the host is on; in other words, NAT mode is how the guest gets onto the Internet. In NAT mode the guest's TCP/IP configuration is supplied by the DHCP server of the VMnet8 (NAT) virtual network and cannot be changed by hand, so the guest cannot communicate with the other real hosts on the LAN. The big advantage of NAT mode is that getting the guest onto the Internet is very simple: no further configuration is needed, as long as the host itself can reach the Internet. If you want to install a new guest with VMware and have it reach the Internet without any manual configuration, NAT mode is recommended. Tip: the VMnet8 virtual network used by NAT mode, the VMnet1 virtual network used by host-only mode and the VMnet0 virtual network used by bridged mode are all created automatically by VMware and need no manual setup. VMnet8 and VMnet1 provide DHCP; VMnet0 does not.

Monday, December 14, 2009

gitignore(5) Manual Page




NAME




gitignore -
Specifies intentionally untracked files to ignore




SYNOPSIS



$GIT_DIR/info/exclude, .gitignore



DESCRIPTION



A gitignore file specifies intentionally untracked files that
git should ignore.
Note that all the gitignore files really concern only files
that are not already tracked by git;
in order to ignore uncommitted changes in already tracked files,
please refer to the git update-index --assume-unchanged
documentation.


Each line in a gitignore file specifies a pattern.
When deciding whether to ignore a path, git normally checks
gitignore patterns from multiple sources, with the following
order of precedence, from highest to lowest (within one level of
precedence, the last matching pattern decides the outcome):





  • Patterns read from the command line for those commands that support
    them.





  • Patterns read from a .gitignore file in the same directory
    as the path, or in any parent directory, with patterns in the
    higher level files (up to the toplevel of the work tree) being overridden
    by those in lower level files down to the directory containing the file.
    These patterns match relative to the location of the
    .gitignore file. A project normally includes such
    .gitignore files in its repository, containing patterns for
    files generated as part of the project build.





  • Patterns read from $GIT_DIR/info/exclude.





  • Patterns read from the file specified by the configuration
    variable core.excludesfile.




Which file to place a pattern in depends on how the pattern is meant to
be used. Patterns which should be version-controlled and distributed to
other repositories via clone (i.e., files that all developers will want
to ignore) should go into a .gitignore file. Patterns which are
specific to a particular repository but which do not need to be shared
with other related repositories (e.g., auxiliary files that live inside
the repository but are specific to one user's workflow) should go into
the $GIT_DIR/info/exclude file. Patterns which a user wants git to
ignore in all situations (e.g., backup or temporary files generated by
the user's editor of choice) generally go into a file specified by
core.excludesfile in the user's ~/.gitconfig.


The underlying git plumbing tools, such as
git-ls-files and git-read-tree, read
gitignore patterns specified by command-line options, or from
files specified by command-line options. Higher-level git
tools, such as git-status and git-add,
use patterns from the sources specified above.


Patterns have the following format:





  • A blank line matches no files, so it can serve as a separator
    for readability.





  • A line starting with # serves as a comment.





  • An optional prefix ! which negates the pattern; any
    matching file excluded by a previous pattern will become
    included again. If a negated pattern matches, this will
    override lower precedence pattern sources.





  • If the pattern ends with a slash, it is removed for the
    purpose of the following description, but it would only find
    a match with a directory. In other words, foo/ will match a
    directory foo and paths underneath it, but will not match a
    regular file or a symbolic link foo (this is consistent
    with the way how pathspec works in general in git).





  • If the pattern does not contain a slash /, git treats it as
    a shell glob pattern and checks for a match against the
    pathname without leading directories.





  • Otherwise, git treats the pattern as a shell glob suitable
    for consumption by fnmatch(3) with the FNM_PATHNAME flag:
    wildcards in the pattern will not match a / in the pathname.
    For example, "Documentation/*.html" matches
    "Documentation/git.html" but not
    "Documentation/ppc/ppc.html". A leading slash matches the
    beginning of the pathname; for example, "/*.c" matches
    "cat-file.c" but not "mozilla-sha1/sha1.c".




An example:




    $ git status
[...]
# Untracked files:
[...]
# Documentation/foo.html
# Documentation/gitignore.html
# file.o
# lib.a
# src/internal.o
[...]
$ cat .git/info/exclude
# ignore objects and archives, anywhere in the tree.
*.[oa]
$ cat Documentation/.gitignore
# ignore generated html files,
*.html
# except foo.html which is maintained by hand
!foo.html
$ git status
[...]
# Untracked files:
[...]
# Documentation/foo.html
[...]


Another example:




    $ cat .gitignore
vmlinux*
$ ls arch/foo/kernel/vm*
arch/foo/kernel/vmlinux.lds.S
$ echo '!/vmlinux*' >arch/foo/kernel/.gitignore


The second .gitignore prevents git from ignoring
arch/foo/kernel/vmlinux.lds.S.
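Putting several of the rules above together, a purely illustrative top-level
.gitignore (these file names are made up for the example) could read:

$ cat .gitignore
# object files and archives, anywhere in the tree
*.[oa]
# the generated output directory only (the trailing slash matches directories)
doc/output/
# only the top-level config.mk, not config.mk files in subdirectories
/config.mk
# but keep this one object file even though *.[oa] matches it
!tools/keep-me.o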



Documentation



Documentation by David Greaves, Junio C Hamano, Josh Triplett,
Frank Lichtenheld, and the git-list <[email protected]>.




GIT



Part of the git(1) suite


The basic steps for using git in debian/ubuntu








1. install a git server & initialize a repository

install git-core







$ apt-get install git-core


check the path of git-shell







$ which git-shell
/usr/bin/git-shell


add some users which can access the git server via the git-shell







$ sudo useradd -m -s /usr/bin/git-shell git
$ sudo passwd git
(you can add more users as above)


initialize a git repository: foo







$ sudo mkdir /home/git/foo
$ cd /home/git/foo
$ sudo git --bare init
$ sudo chown -R git:git /home/git/foo


first commit(create the master branch)







$ mkdir foo
$ cd foo
$ git init
$ touch README
$ git add README
$ git commit -m "added README"
$ git remote add origin [email protected]:/home/git/foo
$ git push origin master
$ cd ..; rm -rf foo
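A quick way to confirm that the push reached the server — not part of the original steps, just a sanity check — is to list the remote refs; replace <server> with the address of your git host:

$ git ls-remote git@<server>:/home/git/foo
<commit id>    refs/heads/master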


2. using git as a client

firstly, you also need to install git-core







$ sudo apt-get install git-core


clone the repository from the git server







$ git clone [email protected]:/home/git/foo


do a little configuration







$ cd foo
$ git config --global user.name "git"
$ git config --global user.email "git's email"


update (synchronize with the server)







$ git pull


add a new file or directory







$ touch file
$ git add file


commit the above changes with a descriptive message







$ git commit -a -m "some comments here..."


push your commits to the git server







$ git push


that's all, have fun :-)

References & Links

[1] Remote Git Repos on Ubuntu: The Right Way
http://blog.drewolson.org/2008/05/remote-git-repos-on-ubuntu-right-way.html
[2] Installing Git on a server (Ubuntu or Debian)
http://www.urbanpuddle.com/articles/2008/07/11/installing-git-on-a-server-ubuntu-or-debian
[3] Hosting Git repositories, The Easy (and Secure) Way
http://scie.nti.st/2007/11/14/hosting-git-repositories-the-easy-and-secure-way



Friday, December 4, 2009

VirtualBox NAT configuration and port forwarding

This post works through the NAT section of the VirtualBox built-in help (quoted in English below), with my own notes; corrections of anything I got wrong are welcome — I am not an English major. Unfinished, to be continued. If you repost this article, please credit the source; many thanks.

1. Network Address Translation (NAT)

Network Address Translation (NAT) is the simplest way of accessing an external network from a virtual machine. Usually, it does not require any configuration on the host network and guest system. For this reason, it is the default networking mode in VirtualBox.

A virtual machine with NAT enabled acts much like a real computer that connects to the Internet through a router. The “router”, in this case, is the VirtualBox networking engine, which maps traffic from and to the virtual machine transparently. The disadvantage of NAT mode is that, much like a private network behind a router, the virtual machine is invisible and unreachable from the outside internet; you cannot run a server this way unless you set up port forwarding (described below).

The virtual machine receives its network address and configuration on the private network from a DHCP server that is integrated into VirtualBox. The address which the virtual machine receives is usually on a completely different network to the host. As more than one card of a virtual machine can be set up to use NAT, the first card is connected to the private network 10.0.2.0, the second card to the network 10.0.3.0 and so on.

By default the guest gets the IP address 10.0.2.15, with gateway 10.0.2.2 and DNS server 10.0.2.3; if you configure the guest by hand, use these values as a reference.

The network frames sent out by the guest operating system are received by VirtualBox’s NAT engine, which extracts the TCP/IP data, and resends it using the host operating system. To an application on the host, or to another computer on the same network as the host, it looks like the data was sent by the VirtualBox application on the host, using an IP address belonging to the host. VirtualBox listens for replies to the packages sent, and repacks and resends them to the guest machine on its private network.

You can set up a guest service which you wish to proxy using the command line tool VBoxManage. You will need to know which ports on the guest the service uses and to decide which ports to use on the host (often but not always you will want to use the same ports on the guest and on the host). You can use any ports on the host which are not already in use by a service. An example of how to set up incoming NAT connections to a ssh server on the guest requires the following three commands:

VBoxManage setextradata "Linux Guest" "VBoxInternal/Devices/pcnet/0/LUN#0/Config/guestssh/Protocol" TCP

VBoxManage setextradata "Linux Guest" "VBoxInternal/Devices/pcnet/0/LUN#0/Config/guestssh/GuestPort" 22

VBoxManage setextradata "Linux Guest" "VBoxInternal/Devices/pcnet/0/LUN#0/Config/guestssh/HostPort" 2222

Note: VBoxManage is a command-line program found in the VirtualBox installation directory. "Linux Guest" is the name of the virtual machine, and guestssh is an arbitrary name you can choose freely. The three commands above forward the guest's port 22 to port 2222 on the host.

Another example: I installed an apache2 server on my debian guest, listening on port 80, and mapped it to port 80 on the host with the following commands.

"C:\Program Files\innotek VirtualBox\VBoxManage.exe" setextradata "debian" "VBoxInternal/Devices/pcnet/0/LUN#0/Config/huzhangsheng/Protocol" TCP

"C:\Program Files\innotek VirtualBox\VBoxManage.exe" setextradata "debian" "VBoxInternal/Devices/pcnet/0/LUN#0/Config/huzhangsheng/GuestPort" 80

"C:\Program Files\innotek VirtualBox\VBoxManage.exe" setextradata "debian" "VBoxInternal/Devices/pcnet/0/LUN#0/Config/huzhangsheng/HostPort" 80

Note: for the settings to take effect, close VirtualBox and then start the virtual machine again. I run VirtualBox on Windows XP with debian 4.02r installed in a VM named "debian", along with apache2, php5 and mysql-server. Browsing http://localhost with IE on the host was successfully forwarded to the apache2 web server in the debian guest. From this I get the impression that VirtualBox's configuration may be more flexible and powerful than VMware's.
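To double-check what was written, VBoxManage can list all the extra data stored for a VM; the VM names below are the ones from the examples above, and "user" in the ssh test is a placeholder for an account inside the guest:

VBoxManage getextradata "Linux Guest" enumerate
VBoxManage getextradata "debian" enumerate
ssh -p 2222 user@localhost

The last command should reach the guest's sshd through the guestssh rule from the first example.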

Thursday, December 3, 2009

An overview of Linux user and group management

Abstract: this article covers the concepts behind user and group management on a Linux system and lists the related commands; it also explains single-user multitasking and multi-user multitasking. It is a fairly basic document.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Main text
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++

I. Understanding single-user multitasking and multi-user multitasking on Linux

Linux is a multi-user, multitasking operating system, so we should understand what single-user multitasking and multi-user multitasking mean.

1. Single-user multitasking on Linux

Single-user multitasking: say I log in as beinan. Once in the system I open gedit to write a document, then feel like some music so I start xmms, then open MSN to see what my friends are up to, and of course the fcitx input method is running too. Put simply, one user, beinan, runs several tasks to get his work done — and other people could still log in remotely under other accounts at the same time and do their own work.

2. Multi-user, multitasking Linux

Often many users use the same system at the same time, and they are not all doing the same thing — hence multi-user multitasking.

Take the LinuxSir.Org server as an example: it has FTP users, system administrators, web users, ordinary users and so on. At any one moment some people may be browsing the forum, others uploading packages to the software sub-site (for instance luma or Yuking managing their pages and FTP areas), while a system administrator maintains the system. Web visitors all share the nobody user; uploads go through the FTP user; the administrator may use an ordinary account or the all-powerful root account. Different users hold different permissions; different tasks need different users, and different users get different kinds of work done.

Note that multi-user multitasking does not mean everyone crowding around one keyboard and monitor at the same machine; most users connect through remote logins, as with remote administration of a server — anyone with an account and the right permissions can log in and work.

3. User roles

Users in the system have different roles, and on Linux the role determines the permissions and the tasks a user can carry out. Roles are distinguished by the UID (and GID), the UID above all; in system administration the administrator must keep UIDs unique.

The root user: there is only one, it is a real account that can log in, it can operate on any file and run any command, and it holds the highest privileges.
Virtual users: also called pseudo or dummy users. Unlike real users they cannot log in, but the system cannot run without them — for example bin, daemon, adm, ftp, mail and so on. They come with the system rather than being added later, although you can add virtual users of your own.
Ordinary real users: they can log in, but can only work within their own home directories; their permissions are limited; they are added by the system administrator.

4. Security of a multi-user operating system

A multi-user system is in practice easier to administer, and from a security standpoint it is also safer. For example, if user beinan does not want other users to see a certain file, he only needs to set its permissions so that only beinan can read, write and edit it; then only beinan can touch that private file. Linux is at its best in multi-user use and protects each user well — but you still have to learn it: even the most secure system is not secure in the hands of an administrator without security awareness or skills.

From the server point of view, multi-user security matters most of all. The Windows systems we commonly use are merely average at system permission management and simply cannot compare with Linux or Unix-like systems in this respect;

II. The concepts of user and group

1. What a user is

From the discussion of multi-user Linux above, it is clear that Linux is a genuinely multi-user operating system, so we can create any number of users on it. Say a colleague wants to use my computer but I do not want him logging in under my account, because my account contains material I would rather keep private; I can simply create a new account for him and let him do whatever he likes under that account — which is exactly how things should be done from a security standpoint.

The notion of a user goes further than that, though: some users exist to carry out specific tasks, such as nobody and ftp. The web pages of LinuxSir.Org are served as the nobody user; anonymous FTP access uses the ftp or nobody user. To see the accounts on your own system, look at /etc/passwd.

2. What a group is

A group is a collection of users with the same characteristics. Sometimes several users should have the same permissions — say to read or modify a file, or to run a command. We put those users into one group and then grant the permissions on the file or directory to the group, so that every user in the group has the same rights to it; this is achieved by defining the group and adjusting the file's permissions.

For example, to let a set of users view a document — say a timetable — whose author keeps read, write and execute rights, while the others may read it but not modify it, we put those users into a group and give the group read permission on the file; then every user in the group can read it.

The relationship between users and groups can be one-to-one, many-to-one, one-to-many or many-to-many:

One-to-one: a user may be the only member of a group;
Many-to-one: several users may all belong to one and the same group and to no other — for instance beinan and linuxsir both belonging only to the beinan group;
One-to-many: a user may belong to several groups — for instance beinan could be a member of the root group, the linuxsir group and the adm group;
Many-to-many: several users belong to several groups, possibly overlapping; this is just the previous three cases combined, so if you understood those, this one follows.

III. Configuration files, commands and directories related to users and groups

1. Configuration files related to users and groups

1) User-related configuration files

/etc/passwd — the user account configuration file;
/etc/shadow — the user shadow password file;

2) Group-related configuration files

/etc/group — the group configuration file;
/etc/gshadow — the group shadow file;

2. Tools and commands for managing users and groups

1) User management tools and commands

useradd — add a user
adduser — add a user
passwd — set a user's password
usermod — modify a user account; it can change the login name, home directory and so on
pwconv — sync users from /etc/passwd into /etc/shadow
pwck — check that the contents of /etc/passwd and /etc/shadow are valid and complete
pwunconv — the reverse of pwconv: rebuilds /etc/passwd from /etc/shadow and /etc/passwd, then deletes the /etc/shadow file
finger — look up user information
id — show a user's UID, GID and the groups they belong to
chfn — change a user's information (finger data)
su — switch to another user
sudo — execute a command as another user. su switches to the other user and then works as that user, whereas sudo runs a single command directly; for example, without knowing the root password you can run the commands root has granted you — which is configured by editing /etc/sudoers with visudo
visudo — the command for editing /etc/sudoers; editing /etc/sudoers directly with vi has the same effect
sudoedit — much the same functionality as sudo

2) Group management tools and commands

groupadd — add a group
groupdel — delete a group
groupmod — modify group information
groups — show the groups a user belongs to
grpck — check /etc/group and /etc/gshadow for consistency
grpconv — create or synchronize /etc/gshadow from the contents of /etc/group and /etc/gshadow; /etc/gshadow is created if it does not exist
grpunconv — synchronize or create /etc/group from the contents of /etc/group and /etc/gshadow, then delete the gshadow file
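A minimal sketch that ties a few of these commands together — the user and group names are made up:

groupadd developers                            # add a group
useradd -m -g developers -s /bin/bash alice    # add a user whose primary group is developers
passwd alice                                   # set her password
id alice                                       # check her UID, GID and groups
usermod -c "Alice Liddell" alice               # change the comment (GECOS) field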

3. The /etc/skel directory

/etc/skel is the directory that holds the default user startup files and is controlled by root. When a user is added, the files in this directory are copied automatically into the new user's home directory. The files under /etc/skel are hidden files of the .file kind. By modifying, adding or removing files under /etc/skel we can provide a uniform, standard default environment for new users.

[[email protected] beinan]# ls -la /etc/skel/
total 92
drwxr-xr-x 3 root root 4096 Aug 11 23:32 .
drwxr-xr-x 115 root root 12288 Oct 14 13:44 ..
-rw-r--r-- 1 root root 24 May 11 00:15 .bash_logout
-rw-r--r-- 1 root root 191 May 11 00:15 .bash_profile
-rw-r--r-- 1 root root 124 May 11 00:15 .bashrc
-rw-r--r-- 1 root root 5619 2005-03-08 .canna
-rw-r--r-- 1 root root 438 May 18 15:23 .emacs
-rw-r--r-- 1 root root 120 May 23 05:18 .gtkrc
drwxr-xr-x 3 root root 4096 Aug 11 23:16 .kde
-rw-r--r-- 1 root root 658 2005-01-17 .zshrc

The files under /etc/skel are normally copied into the new home directory automatically when a user is created with useradd or adduser. If instead you add a user by editing /etc/passwd directly, you can create the home directory yourself, copy the files from /etc/skel into it, and then use chown to change the owner of the new home directory to the new user.
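For the manual case just described (a user added by editing /etc/passwd directly), the copy-and-chown step might look like this; "newuser" is only a placeholder:

mkdir /home/newuser
cp -a /etc/skel/. /home/newuser/       # -a and the trailing /. bring the hidden files along
chown -R newuser:newuser /home/newuser
chmod 700 /home/newuser                # optional: keep the home directory private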

4. The /etc/login.defs configuration file

/etc/login.defs holds the rules applied when users are created: whether a home directory is needed, the UID and GID ranges, account expiry and so on. The file can be edited by root.

For example, the /etc/login.defs file on Fedora:

# *REQUIRED*
# Directory where mailboxes reside, _or_ name of file, relative to the
# home directory. If you _do_ define both, MAIL_DIR takes precedence.
# QMAIL_DIR is for Qmail
#
#QMAIL_DIR Maildir
MAIL_DIR /var/spool/mail   Note: when a user is created, a mail spool file for the user is created under /var/spool/mail;
#MAIL_FILE .mail

# Password aging controls:
#
# PASS_MAX_DAYS Maximum number of days a password may be used.
# PASS_MIN_DAYS Minimum number of days allowed between password changes.
# PASS_MIN_LEN Minimum acceptable password length.
# PASS_WARN_AGE Number of days warning given before a password expires.
#
PASS_MAX_DAYS 99999 Note: maximum number of days a password may be used before it expires;
PASS_MIN_DAYS 0 Note: minimum number of days between password changes;
PASS_MIN_LEN 5 Note: minimum password length;
PASS_WARN_AGE 7 Note: number of days of warning given before a password expires;

#
# Min/max values for automatic uid selection in useradd
#
UID_MIN 500 Note: the minimum UID is 500, i.e. new users are assigned UIDs starting at 500;
UID_MAX 60000 Note: the maximum UID is 60000;

#
# Min/max values for automatic gid selection in groupadd
#
GID_MIN 500 Note: GIDs start at 500;
GID_MAX 60000

#
# If defined, this command is run when removing a user.
# It should remove any at/cron/print jobs etc. owned by
# the user to be removed (passed as the first argument).
#
#USERDEL_CMD /usr/sbin/userdel_local

#
# If useradd should create home directories for users by default
# On RH systems, we do. This option is ORed with the -m flag on
# useradd command line.
#
CREATE_HOME yes Note: whether to create the user's home directory; here it is required;

5. The /etc/default/useradd file

The default rules applied when users are added with useradd;
# useradd defaults file
GROUP=100
HOME=/home Note: create user home directories under /home;
INACTIVE=-1 Note: whether to disable the account after the password expires; -1 means not enabled;
EXPIRE= Note: account expiry date; unset means no expiry;
SHELL=/bin/bash Note: the default shell;
SKEL=/etc/skel Note: where the default files for a new user's home directory are kept; when adduser creates a user, the files in the new home directory are copied from this directory;
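These defaults can also be inspected (and changed) from the command line instead of editing the file by hand; for example:

useradd -D                  # print the current defaults: GROUP, HOME, SHELL, SKEL, ...
useradd -D -s /bin/bash     # change the default shell recorded in /etc/default/useradd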

Postscript:

That is about it for user and group management; once you understand and master the material above you have most of what you need. Since users and groups are tied to file and directory permissions, file and directory permissions will be covered in a separate article.

This article is only meant to give newcomers the basic ideas behind users and groups, so it is mostly explanation; the point of walking through and indexing a few commands is that a little theory matters — the hands-on part is simply learning how the commands are used.

The content will keep being updated and revised; some commands deserve articles of their own, which I will finish within the next few days.

References:

The relevant Linux man pages and help output.

Acknowledgements:

pandonny also contributed to this article — thank you.

Related documents:

"An overview of Linux user and group management"
"The user and group configuration files explained"
"Querying Linux users"
"An introduction to Linux user management tools"
"Controlling superuser privileges on Linux"
"Batch-adding users on a Linux system"

Changing the size of the logo area

In the template HTML editor you can see the code for the logo section, in /templates/ja_purity/index.php
<?php
$siteName = $tmpTools->sitename();
if ($tmpTools->getParam('logoType')=='image'): ?>
<h1>
<a href="index.php" title="<?php echo $siteName; ?>"><span><?php echo $siteName; ?></span></a>
</h1>

Have a look at template.css and you will find:
h1.logo a {
width: 208px;
display: block;
background: url(../images/logo.png) no-repeat;
height: 80px;
position: relative;
z-index: 100;
}

Just adjust the width and height there.

phpBB: categories and forums not showing up

I installed phpBB3 and, after setting up the categories in the admin panel, found that only 2 of them showed up on the front end.
Surprised, I googled it — very few people seem to have hit this problem... strange.
I suspected the style files were limiting it, but switching styles made no difference.
I kept digging and finally found it: it was a permissions issue!
In phpBB every category and every forum must have permissions set before it shows up on the front end;
setting permissions only on the category, or only on the forum, is not enough — both must be set.
If you are not sure how to set them, you can copy the settings of the default category.

Changing your forum logo

Contents
• Getting started
• Uploading your logo
• Refreshing your theme
• Resizing your logo
• Finishing up

Getting started
This article explains how to upload a new logo to your phpBB3 forum.

If you have never uploaded files before, or do not know how to use FTP, please read that tutorial first before continuing.

Uploading your logo
So you really want to replace the phpBB logo with your own?

First, find (or create) your own logo and (re)name it 'site_logo.gif'. Note that the file name is entirely lowercase.

Then, using your FTP client, upload it to phpBB3/styles/prosilver/imageset/ (*),
overwriting the existing logo.

You have now uploaded your logo successfully.

(*) This assumes you are using the prosilver style.

Refreshing your theme
Because you have overwritten an existing file with the new logo, the theme must be refreshed before the change shows up.

Log in to your forum and go to the Administration Control Panel (ACP).
From there, click the 'Styles' tab, choose 'Themes', and click 'Refresh' next to the style you are using.

Your new logo should now appear on the forum; if not, try refreshing your browser.

Resizing your logo
Does the new logo break the layout? That is because it has different dimensions.

Go back to ACP -> Styles -> Imagesets.
Click 'Edit' next to the style you are using.
In the drop-down list choose 'Main logo' and press the 'Select' button.
Below that you will see options for the image width and height.
Enter the correct dimensions for your logo and submit.

Finishing up
That should be everything.

If the logo still does not display correctly, try refreshing your style again.

http://phpbb-tw.net/phpbb/viewtopic.php?f=176&t=51343

MySQL Commands

This is a list of handy MySQL commands that I use time and time again. At the bottom are statements, clauses, and functions you can use in MySQL. Below that are PHP and Perl API functions you can use to interface with MySQL. To use those you will need to build PHP with MySQL functionality. To use MySQL with Perl you will need to use the Perl modules DBI and DBD::mysql.

Below when you see # it means from the unix shell. When you see mysql> it means from a MySQL prompt after logging into MySQL.
To login (from unix shell) use -h only if needed.

# [mysql dir]/bin/mysql -h hostname -u root -p
Create a database on the sql server.

mysql> create database [databasename];
List all databases on the sql server.

mysql> show databases;
Switch to a database.

mysql> use [db name];
To see all the tables in the db.

mysql> show tables;
To see database's field formats.

mysql> describe [table name];
To delete a db.

mysql> drop database [database name];
To delete a table.

mysql> drop table [table name];
Show all data in a table.

mysql> SELECT * FROM [table name];
Returns the columns and column information pertaining to the designated table.

mysql> show columns from [table name];
Show certain selected rows with the value "whatever".

mysql> SELECT * FROM [table name] WHERE [field name] = "whatever";
Show all records containing the name "Bob" AND the phone number '3444444'.

mysql> SELECT * FROM [table name] WHERE name = "Bob" AND phone_number = '3444444';
Show all records not containing the name "Bob" AND the phone number '3444444' order by the phone_number field.

mysql> SELECT * FROM [table name] WHERE name != "Bob" AND phone_number = '3444444' order by phone_number;
Show all records starting with the letters 'bob' AND the phone number '3444444'.

mysql> SELECT * FROM [table name] WHERE name like "Bob%" AND phone_number = '3444444';
Show all records starting with the letters 'bob' AND the phone number '3444444' limit to records 1 through 5.

mysql> SELECT * FROM [table name] WHERE name like "Bob%" AND phone_number = '3444444' limit 1,5;
Use a regular expression to find records. Use "REGEXP BINARY" to force case-sensitivity. This finds any record beginning with a.

mysql> SELECT * FROM [table name] WHERE rec RLIKE "^a";
Show unique records.

mysql> SELECT DISTINCT [column name] FROM [table name];
Show selected records sorted in ascending (asc) or descending (desc) order.

mysql> SELECT [col1],[col2] FROM [table name] ORDER BY [col2] DESC;
Return number of rows.

mysql> SELECT COUNT(*) FROM [table name];
Sum column.

mysql> SELECT SUM([column name]) FROM [table name];
Join tables on common columns.

mysql> SELECT lookup.illustrationid, lookup.personid, person.birthday FROM lookup LEFT JOIN person ON lookup.personid = person.personid;
(this joins the birthday in the person table with the primary illustration id in the lookup table)
Creating a new user. Login as root. Switch to the MySQL db. Make the user. Update privs.

# mysql -u root -p
mysql> use mysql;
mysql> INSERT INTO user (Host,User,Password) VALUES('%','username',PASSWORD('password'));
mysql> flush privileges;
Change a user's password from the unix shell.

# [mysql dir]/bin/mysqladmin -u username -h hostname.blah.org -p password 'new-password'
Change a user's password from the MySQL prompt. Login as root. Set the password. Update privs.

# mysql -u root -p
mysql> SET PASSWORD FOR 'user'@'hostname' = PASSWORD('passwordhere');
mysql> flush privileges;
Recover a MySQL root password. Stop the MySQL server process. Start again with no grant tables. Login to MySQL as root. Set new password. Exit MySQL and restart MySQL server.

# /etc/init.d/mysql stop
# mysqld_safe --skip-grant-tables &
# mysql -u root
mysql> use mysql;
mysql> update user set password=PASSWORD("newrootpassword") where User='root';
mysql> flush privileges;
mysql> quit
# /etc/init.d/mysql stop
# /etc/init.d/mysql start
Set a root password if there is no root password.

# mysqladmin -u root password newpassword
Update a root password.

# mysqladmin -u root -p oldpassword newpassword
Allow the user "bob" to connect to the server from localhost using the password "passwd". Login as root. Switch to the MySQL db. Give privs. Update privs.

# mysql -u root -p
mysql> use mysql;
mysql> grant usage on *.* to [email protected] identified by 'passwd';
mysql> flush privileges;
Give user privileges for a db. Login as root. Switch to the MySQL db. Grant privs. Update privs.

# mysql -u root -p
mysql> use mysql;
mysql> INSERT INTO user (Host,Db,User,Select_priv,Insert_priv,Update_priv,Delete_priv,Create_priv,Drop_priv) VALUES ('%','databasename','username','Y','Y','Y','Y','Y','N');
mysql> flush privileges;

or

mysql> grant all privileges on databasename.* to [email protected];
mysql> flush privileges;
To update info already in a table.

mysql> UPDATE [table name] SET Select_priv = 'Y',Insert_priv = 'Y',Update_priv = 'Y' where [field name] = 'user';
Delete a row(s) from a table.

mysql> DELETE from [table name] where [field name] = 'whatever';
Update database permissions/privileges.

mysql> flush privileges;
Delete a column.

mysql> alter table [table name] drop column [column name];
Add a new column to db.

mysql> alter table [table name] add column [new column name] varchar (20);
Change column name.

mysql> alter table [table name] change [old column name] [new column name] varchar (50);
Make a unique column so you get no dupes.

mysql> alter table [table name] add unique ([column name]);
Make a column bigger.

mysql> alter table [table name] modify [column name] VARCHAR(3);
Delete unique from table.

mysql> alter table [table name] drop index [colmn name];
Load a CSV file into a table.

mysql> LOAD DATA INFILE '/tmp/filename.csv' replace INTO TABLE [table name] FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' (field1,field2,field3);
Dump all databases for backup. Backup file is sql commands to recreate all db's.

# [mysql dir]/bin/mysqldump -u root -ppassword --opt >/tmp/alldatabases.sql
Dump one database for backup.

# [mysql dir]/bin/mysqldump -u username -ppassword --databases databasename >/tmp/databasename.sql
Dump a table from a database.

# [mysql dir]/bin/mysqldump -c -u username -ppassword databasename tablename > /tmp/databasename.tablename.sql
Restore database (or database table) from backup.

# [mysql dir]/bin/mysql -u username -ppassword databasename < /tmp/databasename.sql
Create Table Example 1.

mysql> CREATE TABLE [table name] (firstname VARCHAR(20), middleinitial VARCHAR(3), lastname VARCHAR(35),suffix VARCHAR(3),officeid VARCHAR(10),userid VARCHAR(15),username VARCHAR(8),email VARCHAR(35),phone VARCHAR(25), groups VARCHAR(15),datestamp DATE,timestamp time,pgpemail VARCHAR(255));
Create Table Example 2.

mysql> create table [table name] (personid int(50) not null auto_increment primary key,firstname varchar(35),middlename varchar(50),lastname varchar(50) default 'bato');
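As an aside, most of the interactive statements above can also be run straight from the unix shell with the -e option, which is handy in scripts; the database and table names below are placeholders.

# [mysql dir]/bin/mysql -u root -p -e "SHOW DATABASES;"
# [mysql dir]/bin/mysql -u root -p databasename -e "SELECT COUNT(*) FROM tablename;"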

MYSQL Statements and clauses

ALTER DATABASE

ALTER TABLE

ALTER VIEW

ANALYZE TABLE

BACKUP TABLE

CACHE INDEX

CHANGE MASTER TO

CHECK TABLE

CHECKSUM TABLE

COMMIT

CREATE DATABASE

CREATE INDEX

CREATE TABLE

CREATE VIEW

DELETE

DESCRIBE

DO

DROP DATABASE

DROP INDEX

DROP TABLE

DROP USER

DROP VIEW

EXPLAIN

FLUSH

GRANT

HANDLER

INSERT

JOIN

KILL

LOAD DATA FROM MASTER

LOAD DATA INFILE

LOAD INDEX INTO CACHE

LOAD TABLE...FROM MASTER

LOCK TABLES

OPTIMIZE TABLE

PURGE MASTER LOGS

RENAME TABLE

REPAIR TABLE

REPLACE

RESET

RESET MASTER

RESET SLAVE

RESTORE TABLE

REVOKE

ROLLBACK

ROLLBACK TO SAVEPOINT

SAVEPOINT

SELECT

SET

SET PASSWORD

SET SQL_LOG_BIN

SET TRANSACTION

SHOW BINLOG EVENTS

SHOW CHARACTER SET

SHOW COLLATION

SHOW COLUMNS

SHOW CREATE DATABASE

SHOW CREATE TABLE

SHOW CREATE VIEW

SHOW DATABASES

SHOW ENGINES

SHOW ERRORS

SHOW GRANTS

SHOW INDEX

SHOW INNODB STATUS

SHOW LOGS

SHOW MASTER LOGS

SHOW MASTER STATUS

SHOW PRIVILEGES

SHOW PROCESSLIST

SHOW SLAVE HOSTS

SHOW SLAVE STATUS

SHOW STATUS

SHOW TABLE STATUS

SHOW TABLES

SHOW VARIABLES

SHOW WARNINGS

START SLAVE

START TRANSACTION

STOP SLAVE

TRUNCATE TABLE

UNION

UNLOCK TABLES

USE

String Functions

AES_DECRYPT

AES_ENCRYPT

ASCII

BIN

BINARY

BIT_LENGTH

CHAR

CHAR_LENGTH

CHARACTER_LENGTH

COMPRESS

CONCAT

CONCAT_WS

CONV

DECODE

DES_DECRYPT

DES_ENCRYPT

ELT

ENCODE

ENCRYPT

EXPORT_SET

FIELD

FIND_IN_SET

HEX

INET_ATON

INET_NTOA

INSERT

INSTR

LCASE

LEFT

LENGTH

LOAD_FILE

LOCATE

LOWER

LPAD

LTRIM

MAKE_SET

MATCH AGAINST

MD5

MID

OCT

OCTET_LENGTH

OLD_PASSWORD

ORD

PASSWORD

POSITION

QUOTE

REPEAT

REPLACE

REVERSE

RIGHT

RPAD

RTRIM

SHA

SHA1

SOUNDEX

SPACE

STRCMP

SUBSTRING

SUBSTRING_INDEX

TRIM

UCASE

UNCOMPRESS

UNCOMPRESSED_LENGTH

UNHEX

UPPER

Date and Time Functions

ADDDATE

ADDTIME

CONVERT_TZ

CURDATE

CURRENT_DATE

CURRENT_TIME

CURRENT_TIMESTAMP

CURTIME

DATE

DATE_ADD

DATE_FORMAT

DATE_SUB

DATEDIFF

DAY

DAYNAME

DAYOFMONTH

DAYOFWEEK

DAYOFYEAR

EXTRACT

FROM_DAYS

FROM_UNIXTIME

GET_FORMAT

HOUR

LAST_DAY

LOCALTIME

LOCALTIMESTAMP

MAKEDATE

MAKETIME

MICROSECOND

MINUTE

MONTH

MONTHNAME

NOW

PERIOD_ADD

PERIOD_DIFF

QUARTER

SEC_TO_TIME

SECOND

STR_TO_DATE

SUBDATE

SUBTIME

SYSDATE

TIME

TIMEDIFF

TIMESTAMP

TIMESTAMPDIFF

TIMESTAMPADD

TIME_FORMAT

TIME_TO_SEC

TO_DAYS

UNIX_TIMESTAMP

UTC_DATE

UTC_TIME

UTC_TIMESTAMP

WEEK

WEEKDAY

WEEKOFYEAR

YEAR

YEARWEEK

Mathematical and Aggregate Functions

ABS

ACOS

ASIN

ATAN

ATAN2

AVG

BIT_AND

BIT_OR

BIT_XOR

CEIL

CEILING

COS

COT

COUNT

CRC32

DEGREES

EXP

FLOOR

FORMAT

GREATEST

GROUP_CONCAT

LEAST

LN

LOG

LOG2

LOG10

MAX

MIN

MOD

PI

POW

POWER

RADIANS

RAND

ROUND

SIGN

SIN

SQRT

STD

STDDEV

SUM

TAN

TRUNCATE

VARIANCE

Flow Control Functions

CASE

IF

IFNULL

NULLIF

Command-Line Utilities

comp_err

isamchk

make_binary_distribution

msql2mysql

my_print_defaults

myisamchk

myisamlog

myisampack

mysqlaccess

mysqladmin

mysqlbinlog

mysqlbug

mysqlcheck

mysqldump

mysqldumpslow

mysqlhotcopy

mysqlimport

mysqlshow

perror

Perl API - using functions and methods built into the Perl DBI with MySQL

available_drivers

begin_work

bind_col

bind_columns

bind_param

bind_param_array

bind_param_inout

can

clone

column_info

commit

connect

connect_cached

data_sources

disconnect

do

dump_results

err

errstr

execute

execute_array

execute_for_fetch

fetch

fetchall_arrayref

fetchall_hashref

fetchrow_array

fetchrow_arrayref

fetchrow_hashref

finish

foreign_key_info

func

get_info

installed_versions

last_insert_id

looks_like_number

neat

neat_list

parse_dsn

parse_trace_flag

parse_trace_flags

ping

prepare

prepare_cached

primary_key

primary_key_info

quote

quote_identifier

rollback

rows

selectall_arrayref

selectall_hashref

selectcol_arrayref

selectrow_array

selectrow_arrayref

selectrow_hashref

set_err

state

table_info

table_info_all

tables

trace

trace_msg

type_info

type_info_all

Attributes for Handles

PHP API - using functions built into PHP with MySQL

mysql_affected_rows

mysql_change_user

mysql_client_encoding

mysql_close

mysql_connect

mysql_create_db

mysql_data_seek

mysql_db_name

mysql_db_query

mysql_drop_db

mysql_errno

mysql_error

mysql_escape_string

mysql_fetch_array

mysql_fetch_assoc

mysql_fetch_field

mysql_fetch_lengths

mysql_fetch_object

mysql_fetch_row

mysql_field_flags

mysql_field_len

mysql_field_name

mysql_field_seek

mysql_field_table

mysql_field_type

mysql_free_result

mysql_get_client_info

mysql_get_host_info

mysql_get_proto_info

mysql_get_server_info

mysql_info

mysql_insert_id

mysql_list_dbs

mysql_list_fields

mysql_list_processes

mysql_list_tables

mysql_num_fields

mysql_num_rows

mysql_pconnect

mysql_ping

mysql_query

mysql_real_escape_string

mysql_result

mysql_select_db

mysql_stat

mysql_tablename

mysql_thread_id

mysql_unbuffered_query

http://www.pantz.org/software/mysql/mysqlcommands.html

Baidu interview questions — Linux [reposted]

This article is reposted — it has been misunderstood; I am not the Neusoft student it mentions.
=========================================

1. Suppose Apache writes its log to a file named access_log. While Apache is running, you execute mv access_log access_log.bak. After that, where do new Apache log entries go, and why?
1. The new entries go into access_log.bak. When Apache starts it opens access_log and keeps a file descriptor ready for appending log data. Renaming the file does not change its inode, and the open fd still points at that same inode, so the rename does not affect it and Apache keeps appending to the renamed file. Only when the Apache service is restarted does it check whether access_log exists and create it if it does not.
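A quick way to convince yourself of this answer (purely illustrative; any long-running process that keeps a log file open will do):

( while true; do date >> access_log; sleep 1; done ) &   # a stand-in for the running apache
ls -i access_log                                         # note the inode number
mv access_log access_log.bak                             # rename while the writer is still running
tail -f access_log.bak                                   # the renamed file keeps growing
ls -i access_log.bak                                     # same inode as before the rename
lsof | grep access_log                                   # the process still holds the renamed file open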
2. From a shell, how do you find out how long a remote Linux system has been running?
2. On the monitoring host run: ssh <user>@<IP of the monitored host> "uptime"
This returns the monitored host's uptime.
3. Process the following file contents: extract the domain names, count them and sort by count. For example, processing:
http://www.baidu.com/index.html
http://www.baidu.com/1.html
http://post.baidu.com/index.html
http://mp3.baidu.com/index.html
http://www.baidu.com/3.html
http://post.baidu.com/2.html
should produce the following result:
number of occurrences   domain
3 www.baidu.com
2 post.baidu.com
1 mp3.baidu.com
Any of bash/perl/php/c may be used.
3、[[email protected] shell]# cat file | sed -e ' s/http:\/\///' -e ' s/\/.*//' | sort | uniq -c | sort -rn
3 www.baidu.com
2 post.baidu.com
1 mp3.baidu.com
[[email protected] shell]# awk -F/ '{print $3}' file |sort -r|uniq -c|awk '{print $1"\t",$2}'
3 www.baidu.com
2 post.baidu.com
1 mp3.baidu.com

4. Generate a random string whose length and character set can be defined, and print the string reversed. For example, with 0123456789 as the base character set, generate a 6-character string such as 642031 and print 130246. Any of bash/perl/php/c may be used.
4、[[email protected] ~]# awk -v count=6 'BEGIN {srand();str="0123456789";len=length(str);for(i=count;i>0;i--) marry[i]=substr(str,int(rand()*len),1);for(i=count;i>0;i--) printf("%c",marry[i]);printf("\n");for

(i=0;i<=count;i++) printf("%c",marry[i]);printf("\n")}'
838705
507838
5. How do you check the current state of a Linux system — CPU usage, memory usage, load and so on?
5. On Linux, /proc is a pseudo filesystem: it takes no disk space and reflects the processes currently using memory in real time; many of its files hold the running state of the system and related information.
The files under /proc can be browsed with ordinary file-viewing commands and contain system-specific information:
cpuinfo — CPU information
filesystems — file system information
meminfo — memory information
version — Linux kernel version information
diskstats — disk load / I/O statistics
In addition, top dynamically displays the current processes and users, updating continuously, with a summary of the system at the top of its display.
free shows the memory actually in use; usually free -m.
lsof and ps -aux show the detailed state of each process.
dmesg is also commonly used when looking at system health.
#######################################################################################################################################################################
#Task: there are 10 monitored hosts and one monitoring host. Write a script on the monitoring host that sends an alert e-mail whenever the usage of the / partition on any monitored machine exceeds 80%; put it in crontab and check every 10 minutes.
#Test machine: a Linux AS 4 virtual machine
#1. First set up a trust relationship between the servers. Testing with two machines:
Local IP: 192.168.1.6
[[email protected] ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
/root/.ssh/id_rsa already exists.
Overwrite (y/n)? y (overwriting because this is my second time setting up the trust relationship)
Enter passphrase (empty for no passphrase): (just press Enter, no passphrase needed)
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
04:37:13:2a:4b:10:af:c1:2b:03:3f:6b:27:ce:b9:62 [email protected]
[[email protected] ~]# cd .ssh/
[[email protected] .ssh]# ll
-rw------- 1 root root 883 Apr 25 17:51 id_rsa
-rw-r--r-- 1 root root 221 Apr 25 17:51 id_rsa.pub
-rw-r--r-- 1 root root 442 Apr 25 17:37 known_hosts
id_rsa is the private key file and id_rsa.pub is the public key file.
[[email protected] .ssh]# scp id_rsa.pub 192.168.1.4:/root/.ssh/192.168.1.6
[email protected]'s password:
id_rsa.pub 100% 221 0.2KB/s 00:00
The public key file is named after the local machine's IP address so that trust relationships with more machines later on do not get mixed up.
Now log in to the 192.168.1.4 machine:
[[email protected] ~]# cd .ssh/
[[email protected] .ssh]# cat 192.168.1.6 >> authorized_keys
Then go back to the 192.168.1.6 machine and simply:
[[email protected] .ssh]# ssh 192.168.1.4
Last login: Wed Aug 8 12:14:42 2007 from 192.168.1.6
That is all it takes. Occasionally permissions get in the way: generally the .ssh directory should be 755 and authorized_keys 600 or 644.

####The script is as follows#######################

#!/bin/bash
#SCRIPT:df_check.sh
#Writeen by codfei Mon Sep 3 07:25:28 CST 2007
#PURPOSE:This script is used to monitor for full filesystems.
#######################Begining########################################
FSMAX="80"
remote_user='root' ##### it does not have to be root
remote_ip=(192.168.1.5 192.168.1.6 192.168.1.7 192.168.1.8 192.168.1.9 192.168.1.10 192.168.1.11 192.168.1.12 192.168.1.13 192.168.1.14 )
### fill in the IPs of the hosts you want to monitor here
ip_num='0'
while [ "$ip_num" -le "$(expr ${#remote_ip[@]} - 1)" ]
do
read_num='1'
ssh "$remote_user"@"${remote_ip[$ip_num]}" df -h > /tmp/diskcheck_tmp
grep '^/dev/*' /tmp/diskcheck_tmp|awk '{print $5}'|sed 's/\%//g' > /tmp/diskcheck_num_tmp
while [ "$read_num" -le $(wc -l < /tmp/diskcheck_num_tmp) ]
do
size=$(sed -n "$read_num"'p' /tmp/diskcheck_num_tmp)
if [ "$size" -gt "$FSMAX" ]
then
$(grep '^/dev/*' /tmp/diskcheck_tmp|sed -n $read_num'p' > /tmp/disk_check_mail)
$(echo ${remote_ip[$ip_num]} >> /tmp/disk_check_mail)
$(mail -s "diskcheck_alert" admin < /tmp/disk_check_mail)
fi
read_num=$(expr $read_num + 1)
done
ip_num=$(expr $ip_num + 1)
done

#############over################################
################run the script every ten minutes#############
Add the following entry to the crontab:
*/10 * * * * /home/codfei/diskcheck.sh 2>&1

##########################################################################
For example, with an ext2 filesystem, how do you repair the filesystem after an abnormal shutdown?
After a crash such as a power failure, once the machine-room staff have powered the box back on
we need to check and repair the filesystems remotely.
For every partition other than /:
umount /home
fsck -y /home
The / partition has to be scanned at boot by the people in the machine room;
afterwards we log in and scan /home and the other partitions ourselves.
How do you see which file handles a process is using?
Look in /proc/<pid>/fd/ —
the number of entries there is the answer.
A simple example: how do you count the Apache processes?
[[email protected] fd]# ps -ef|grep httpd|wc -l
1
How do you count Apache's requests per second?
tail access_log | awk '{print $1,$4}'
[[email protected] logs]# grep -c `date -d '3 second ago' +%T` access_log
0
#######################################################################################################################
#######################################################################################################################
1. What the /proc/sys subdirectory is for
This subdirectory reports various kernel parameters and lets you change some of them interactively. Unlike everything else under /proc, some of the files in this directory can be written to, though only by root.

A detailed listing of its directories and files would take too much space, the contents depend on the system, and most of the files are only useful to particular applications. Two of the most common uses of this subdirectory, though:

Enabling routing: even the default Mandrakelinux kernel can route, but you must explicitly allow it to. To do so, just type the following as root:

$ echo 1 >/proc/sys/net/ipv4/ip_forward

To disable routing again, change the 1 in the command above to 0.

Blocking IP spoofing: IP spoofing makes a packet arriving from outside look as though it came from the interface it arrived on, a technique much used by crackers. You can have the kernel block this kind of intrusion. Type:

$ echo 1 >/proc/sys/net/ipv4/conf/all/rp_filter

and that attack is no longer possible.

These changes only last while the system is running; after a reboot they revert to their defaults. To change the values at boot time, add the commands you typed at the shell prompt to /etc/rc.d/rc.local so you do not have to retype them each time, or alternatively edit

/etc/sysctl.conf
2. Merge the odd and even lines of a text file, and merge lines 2 and 3
[[email protected] bin]# cat 1
48 Oct 3bc1997 lpas 68.00 lvx2a 138
484 Jan 380sdf1 usp 78.00 deiv 344
483 nov 7pl1998 usp 37.00 kvm9d 644
320 aug der9393 psh 83.00 wiel 293
231 jul sdf9dsf sdfs 99.00 werl 223
230 nov 19dfd9d abd 87.00 sdiv 230
219 sept 5ap1996 usp 65.00 lvx2c 189
216 Sept 3zl1998 usp 86.00 kvm9e 234
[[email protected] bin]# sed '$!N;s/\n/ /g' 1
48 Oct 3bc1997 lpas 68.00 lvx2a 138 484 Jan 380sdf1 usp 78.00 deiv 344
483 nov 7pl1998 usp 37.00 kvm9d 644 320 aug der9393 psh 83.00 wiel 293
231 jul sdf9dsf sdfs 99.00 werl 223 230 nov 19dfd9d abd 87.00 sdiv 230
219 sept 5ap1996 usp 65.00 lvx2c 189 216 Sept 3zl1998 usp 86.00 kvm9e 234
[[email protected] bin]# sed -n -e 2p -e 3p 1|sed '$!N;s/\n/ /'
484 Jan 380sdf1 usp 78.00 deiv 344 483 nov 7pl1998 usp 37.00 kvm9d 644
3. Make the read command time out automatically after 5 seconds
[[email protected] bin]# read -t 5
4. Automated FTP upload
#!/bin/sh
ftp -n<<END_FTP
open 192.168.1.4
user codfei duibuqi // username codfei, password duibuqi
binary
prompt off // turn off interactive prompting
mput test // upload test
close
bye
END_FTP
Automated ssh login, from A to B and then on to C
#!/usr/bin/expect -f
set timeout 30
spawn ssh [email protected]
expect "password:"
send "pppppp\r"
expect "]*"
send "ssh [email protected]\r"
expect "password:"
send "pppppp\r"
interact

5. # print the first field
[[email protected] bin]# cat 3
eqeqedadasdD
eqeqdadfdfDD
fdsfdsfQWEDD
DSADASDSADSA
[[email protected] bin]#
[[email protected] bin]#
[[email protected] bin]# awk -F "" '{print $1}' 3
e
e
f
D
6. Reverse a string
[[email protected] bin]# cat 8
qweqewqedadaddas
[[email protected] bin]# rev 8
saddadadeqweqewq
########################################Second phone interview
7. Which is best: sed, awk or grep?
I answered that whichever one you have mastered is best, but they kept pressing for a single answer, so I said awk, since it handles both row and column operations well.
8. What do grep -E and -P mean?
I said -E (--extended-regexp) interprets the pattern as an extended regular expression; I was not sure about -P.
9. They also asked about my understanding of operations work and the qualities it requires.
…………

Postscript: during these days of interviewing at Baidu I studied very hard and my scripting improved a lot. It is still a pity I did not get in. I plan to find an operations job in Dalian and keep working toward my goal; after a few years of work I will keep at it.
Three years ago I was a kid from the countryside who did not even know how to turn a computer on and off; now I feel I have at least gotten started with Linux, and I am proud of that — yet Baidu told me I have no potential for growth. I am a Neusoft student, and I have always believed that this industry does not care where you come from,
only whether you have real ability. I was not told in advance about the first phone interview; the call came while I was asleep and my mind went blank... For the second round I was not flustered and was well prepared, but in the end I still was not hired.
The interviewer asked what my biggest disappointment was. I said it was having to take a year off school after an accidental injury before the college entrance exam. Only now do I realize I should have answered: the moment I learned Baidu had turned me down.
Having put in the effort, I can accept the failure calmly.

A Git primer

Given some of CVS's limitations, my colleagues and I have recently been pushing Git inside the company.

Admittedly, pushing SVN would have been much easier. But as lark put it, rolling out any new version-control tool costs time for training, deployment and conversion anyway, and the cost difference between adopting Git and adopting SVN is not as large as you might imagine — so we might as well spend the extra effort on Git and get more benefit from it. That argument convinced me, and the preparation began. I found that many Git tutorials on the web do not explain some basic commands (git-reset, for instance) clearly enough; more of them cover git 1.4 than 1.5; and very little has been written about how to collaborate on top of Git. So I decided to write a tutorial of my own to cut the training cost of rolling Git out in the company.

I am a Git newcomer myself, and writing this tutorial has been part of my own process of learning and exploring Git. The collaboration model described here borrows from CVS and is quite rudimentary, but it is as far as I have gotten. Everything described has been verified by experiment and should at least cover the company's basic collaborative development. After the tutorial was finished, Xie Xin suggested posting it on the blog to share, which I thought was a good idea: on one hand it may help other Git newcomers, and on the other I welcome corrections from Git experts. Thanks go to the author of the Chinese Git tutorial (《Git 中文教程》); the description of Git's advantages in the overview copies the words of some expert on the net whose original source I could not trace — thank you, whoever you are.

Here we go.

1. Overview
For software version control, Kuxun has decided to drop CVS and move to Git.
Why Git? Once you have really learned to use it, the answer feels completely natural; yet when you try to put it into words, words never seem quite enough. So how should the question be answered?
In fact, the key is not how to answer it. The key is that the company has already decided to use it. So, programmers: fire up your browsers, pull out your search engines, and dig up the answer yourselves. Here I can only give you the vaguest impression.
Unlike CVS and SVN, Git is a distributed source-code management tool — the Linux kernel's code is managed with it. It is powerful, and it is fast. The immediate benefits it brings us:
1. Setup so easy anyone can do it: git init, git commit -a, done. Perfect for people who want to put even a couple of lines of throwaway code under version control; it also works nicely as a backup system or for keeping documents in sync between two machines.
2. The vast majority of operations happen locally, without talking to a central code-management server, so you can finally check in code anytime, anywhere; only the finished result needs to be submitted to a central server.
3. Every commit creates a single unique commit id for all the code, instead of per-file version changes as in CVS, so you can check out everything as of any given commit in one go without worrying about which files it touched. (Admittedly SVN can do this too.)
4. Branch management becomes far easier: creating a new branch or switching between branches is a single command, with no extra directories needed.
5. When branches are merged, not only the code but also the check-in history is merged — this matters a lot.
6. ... and much more.
Of course Git also brings some difficulties. To use it well you have to really understand how it works and adopt its way of thinking; for CVS veterans, letting go of the entrenched, purely centralized model is the hardest part and will feel uncomfortable. You may struggle in the early days, but once it clicks you will absolutely love it — as surely as you would pick a passionate lover who drives you mad over a lukewarm, feeling-free companion.
So let's begin the journey of learning Git...
Keep in mind this is only a very simple, entry-level tutorial; becoming a Git expert will take plenty of digging on your own.
2. Basic Git commands

2.1 Creating a Git repository — git-init
Have you ever created a CVS repository? Probably very few of you have — most people only ever check code out of one. Likewise, in collaborative development you will not use this command unless you are the one starting a code module; mostly you will use git-clone (see 2.7). But if you want to develop a small module of your own and keep it under version control for a while (I do this often — it preserves much of the personal development history and makes backup and recovery easy), creating a Git repository is quick and convenient.
At Kuxun, once a module's Git repository is created, the code files are added to it and the repository is placed on a dedicated code-management server so that everyone can clone it later (don't worry if "clone" means nothing yet — it will become clear as you read on). For personal use you can keep the repository anywhere you can reach.

Creating a Git repository is easy: just use the git-init command. (In versions up to and including Git 1.4 this command was git-init.)
a) $ mkdir dir
b) $ cd dir
c) $ git-init
This creates an empty repository, with a subdirectory called .git in the current directory. From now on all change information is stored under that directory — unlike CVS, which creates an annoying CVS directory in every directory and subdirectory.
There is a config file under .git to which we must add some personal information before the repository can be used; otherwise we cannot add or modify any files in it.
The original config file looks like this:
[core]
repositoryformatversion = 0
filemode = true
bare = false
logallrefupdates = true
We need to add:
[user]
name = xxx
email = [email protected]
The Git repository now exists, but it is empty and cannot do anything useful yet; the next step is adding files to it. If you want certain files to be ignored, add a .gitignore file at the root of the repository.
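The same result can be had with the git-config command instead of editing .git/config by hand (available in Git 1.5 and later; the name and address below are placeholders):

git-config user.name  "xxx"
git-config user.email "xxx@example.com"
git-config --list            # confirm that the values landed in .git/config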
2.2 An important command — git-update-index
Before describing how to add files to a Git repository, I have to introduce git-update-index, a command that puzzles many users familiar with CVS. Generally, submitting a code change to any source-control system breaks down into these actions: change the files; tell the system what changed; commit. For example, removing a file from a CVS repository means deleting the file, then cvs delete, and finally cvs commit.
git-update-index is the abstract operation of telling the system that files have changed: put briefly, it notifies the Git repository that the state of some files has changed (newly added, modified, deleted, and so on). It was very commonly used in early Git versions; in newer versions (1.5 and later) it is wrapped by other commands and is no longer recommended for direct use.
Its two most common forms are the following; see man git-update-index for more.
Form 1: git-update-index --add <list of files>. If a file exists, this marks it in the repository as having changed (whether or not it really was modified); if the file does not exist, it tells the repository that a new file is to be added.
Form 2: git-update-index --force-remove <list of files>. This tells the repository that the files are to be removed from it. Whether or not the files have actually been deleted, the command only notifies the repository; the files themselves are not touched.
So git-update-index only notifies and marks things in the repository; it never operates on the actual files.

2.3 Adding and removing files — git-add, git-rm
Strictly speaking, saying that git-add "adds files to the repository" is not right, or at least not complete. git-add is essentially a wrapper around the command "git-update-index --add", so besides adding new files it can also mark modified ones; only after git-add has been called can a commit be made. git-rm is the same: it is a wrapper around git-update-index --force-remove.
For git-add, running git-add * in a directory by default recursively adds every file in its subdirectories to the repository; git-rm behaves the same way — a significant difference from CVS.
You can also list the files currently in the Git repository with git-ls-files.

2.4 Checking the repository state — git-status
This command shows the state of the repository: which files have changed, which have not yet been added to the repository, and so on. It is a good idea to confirm the state with it before every commit, to avoid mistakes.
The most common mistake is modifying a file and calling commit directly, without first running git-add to tell the repository that the file changed, so the file never actually gets committed. If the developer then believes it was committed and keeps modifying or even deletes the file, the changes are not under version control at all. Running git-status before each commit catches this kind of error; pay particular attention to files listed as "Changed but not updated:" — they have changed since the last commit but have not been marked with git-add.

2.5 Committing changes — git-commit
Running git-commit by itself prompts for a commit message; you can also supply one on the command line: git-commit -m "Initial commit of gittutor repository". Note that unlike CVS, Git does not allow an empty commit message; the commit fails otherwise.
git-commit also has an -a option that force-commits changes that were never marked with git-add; using it is not recommended.
Each commit creates a unique commit id for the whole tree, and git-revert can restore the code as of any commit — far more convenient than CVS's per-file version numbers (similar to SVN).
If you want to see exactly which files changed before committing, use git-diff, though its output is not very friendly; a different tool may serve that purpose better. After committing, git-log shows the commit history.
2.6 Branch management — git-branch
Now we come to Git's greatest strength, far beyond anything CVS or SVN offers: branch management.
Probably every programmer regularly runs into situations like these:
1. You have to drop what you are doing immediately, fix a bug in an earlier release and ship it, then get back to your current work.
2. You only want to push one important change to the central repository, but because you need frequent backups you end up committing to the central repository constantly, leaving a pile of useless commit messages in its history.
3. You hand a change to a colleague for code review, but the review is slow, and by the time the feedback arrives your code has already moved on, making the merge extremely painful.
CVS or SVN can perhaps cope with these scenarios, but the process is so tortuous and complicated that it makes everyone miserable. The root cause is that branch management in CVS or SVN is too clumsy to be genuinely usable.
Creating a branch in a Git repository costs practically nothing, so do not be stingy with branches. The first time git-init runs, the system creates a branch named "master"; other branches are created by hand. Here are some common branch strategies that will make day-to-day development much easier:
1. Keep a personal working branch of your own, to avoid disturbing the master branch too much and to make collaboration with others easier.
2. When doing high-risk work, create an experimental branch — throwing away a mess is far better than having to clean one up.
3. When merging someone else's work, it is best to merge into a temporary branch first, and only then fetch the result into your own branch (merging and fetching are described later; keep reading if this is unclear).
2.6.1 Listing branches — git-branch
Running git-branch lists the branches that already exist and shows which one is current.
2.6.2 Creating a branch — git-branch <branch name>
There are two ways to create a branch:
1. git-branch <branch name>
2. git-checkout -b <branch name>
The first creates the branch but does not switch the current working branch to it, so you still need "git-checkout <branch name>" to switch; the second both creates the branch and switches the working branch to it.
Also note that branch names can effectively collide: say I create branches a and b under master, then switch to b and there create a and c again. Git allows this, but the a created under b is a different branch from the a created under master, so in practice this kind of thing is best avoided — it only breeds naming confusion.
2.6.3 Deleting a branch — git-branch -D
git-branch -D <branch name> deletes a branch, but be careful: after deletion, every change that existed only on that branch is unrecoverable.
2.6.4 Switching branches — git-checkout <branch name>
If the branch already exists, git-checkout <branch name> switches the working branch to it.
2.6.5 Viewing branch history — git-show-branch
This command shows how the branches have evolved. For example:
* [dev1] d2
! [master] m2
--
* [dev1] d2
* [dev1^] d1
* [dev1~2] d1
*+ [master] m2
In the example above, the two lines before "--" show that there are two branches, dev1 and master: the last commit on dev1 has the log message "d2" and the last commit on master has "m2". The lines after "--" show the history of the branches, where dev1 is the most recent commit on the dev1 branch, dev1^ is the one before it, and dev1~2 is the one before that.
2.6.6 Merging branches — git-merge
The usage is: git-merge "some memo" <target branch> <source branch>. For example:
git-merge master dev1~2
If the merge has conflicts, Git will say so. These days git-merge is rarely used directly; git-pull has taken its place.
Its usage is: git-pull <target branch> <source branch>, for example git-pull . dev1^
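A small end-to-end sketch of the branch commands from section 2.6 (the branch name dev1 is just an example):

git-branch dev1            # create a branch at the current HEAD
git-checkout dev1          # switch to it
# ...edit, git-add and git-commit as usual...
git-checkout master        # go back to master
git-pull . dev1            # merge dev1 into master, as described above
git-branch                 # list branches; the current one is marked with *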

2.7 Getting a remote Git repository — git-clone
As mentioned in 2.1, unless you are the one starting a code module you will not use git-init; you will mostly use git-clone. With this command you fetch a complete Git repository from a remote location and can then interact with the remote through a few commands.
Git-based code management tends to form a tree-shaped organization: a developer usually clones a working copy from the repository of whoever manages a module, iterates locally, then submits back to that module's manager; the manager reviews those submissions, merges the code into their own repository, and in turn submits the module's code to a higher-level manager.
At Kuxun there is a central Git repository, and everyone git-clones the latest code from it when developing.
git-clone is used like this: git-clone [ssh://][email protected]:path, where "ssh://" is optional (other transports such as rsync also work) and path is the root path of the remote Git repository, also called the repository.
The developer information in .git/config is not cloned along with the remote repository, so you still have to add your developer information to the .git/config file; you also need to add your own .gitignore.
Also, a repository obtained with git-clone only contains the remote repository's current working branch. To see information about the other branches, use "git-branch -r"; to fetch one of the other remote branches, use the command "git checkout -b <local branch name> <remote branch name>", where the remote branch name is one of those listed by git-branch -r, usually of the form "origin/<branch name>". If the local branch name already exists, the "-b" parameter is not needed.

2.8 Fetching a branch from a remote repository — git-pull
Unlike git-clone, git-pull can fetch a particular branch from any Git repository. Usage:
git-pull [email protected]:<remote repository name> <remote branch name>:<local branch name>. This command fetches the named branch of the remote Git repository into a local branch of the local repository; if no local branch name is given, it pulls into the current local branch.
Note that git-pull can also be used to merge branches — it does the same job as git-merge. So if your local branch already has content, git-pull will merge the files and warn if there are conflicts.

2.9 Pushing a local branch to a remote branch — git-push
git-push is the opposite of git-pull: it submits the content of a local branch to a branch of a remote repository. Usage:
git-push [email protected]:<remote repository name> <local branch name>:<remote branch name>. This pushes the named local branch of the local Git repository to the named branch of the remote Git repository.

Be very careful: git-push does not seem to merge files automatically. My experiments indicate this, though I cannot rule out that I was using it wrong. So if a git-push runs into a conflict, the content pushed later simply overwrites what was there, without any warning — which is a dangerous thing in collaborative development.
2.10 Rewinding and restoring the repository — git-reset
Besides resetting abandoned experimental code, rewinding the repository has another important use. Say we cloned a repository from the remote, developed locally, and are about to submit back to the remote. During local development there were functional commits, but also commits made purely as backups and so on — in short, the commit history is full of useless log entries that we would rather not push to the remote along with our work. That is where git-reset comes in.
git-reset is conceptually the most complex command here. Its form is: git-reset [--mixed | --soft | --hard] [<commit-ish>]
The options:
--mixed
The default option, e.g. git-reset [--mixed] dev1^ (see 2.6.5 for the meaning of dev1^). It only resets the branch state back to dev1^ without changing the contents of any working files: all the file changes from dev1^ to dev1 are kept, but all the commit logs between dev1^ and dev1 are cleared, and the changed files are no longer marked with git-add, so if you want to recommit you must git-add the changed files again. After that commit, you get a very clean commit record.
--soft
Equivalent to doing git-reset --mixed and then git-add on the changed files. With this option you can commit straight away.
--hard
This command rolls everything back, including file contents. It is generally only used when discarding dead-end code; after it runs, the file contents cannot be recovered.

2.11 Further reading
The ten sections above only sketch Git's basic commands; for the details, man git under Linux. In addition, http://www.linuxsir.org/main/doc/git/gittutorcn.htm has a more thorough introduction.

3. Collaborative development with Git
Now that Kuxun has adopted Git, how do we actually collaborate? The concrete steps are as follows.
3.1 Getting the latest code

Kuxun keeps a central Git repository. First, the tidied-up code is split into modules and a Git repository is created for each module in the central server, with the files added to it. Developers then use git-clone to bring the code from the central repository into their local development environment.
For larger projects we also suggest each team pick a lead who is responsible for fetching and updating the latest code from the central repository; the other developers clone from that lead's repository, which for them effectively becomes the central repository.

3.2 Local iterative development by each developer

Once the code has been cloned locally, development proceeds in local iterations. Developers are advised not to work directly on master but to create a development branch. Locally you can create temporary branches and commit as freely as you like.

3.3 Asking a colleague for a code review

When local development is finished, ask another colleague for a code review. The process:
1. user2 uses git-pull to pull the developer's (user1's) development branch (dev) into a temporary branch tmp in user2's local repository, and switches the working branch to it to do the review.
2. After the review, user2 switches back to their original development branch, carries on working, and tells user1 the changes are done.
3. user1 git-pulls user2's tmp branch into a local tmp branch and merges it with the dev branch, ending up with a reviewed dev branch.
Of course, user2 can also simply sit next to user1 and review the code in place, skipping the steps above (in the original diagram, step 7 is then not a git-pull but reviewing and modifying directly on the dev branch together with user1).

3.4 Merging with the central repository

Anyone who has used CVS knows to run cvs update before committing, to avoid conflicts with the central repository. Git is no different.
By now the code review is done and we are ready to submit our changes to the central repository. While we were developing, the central repository may have changed, so before submitting we git-pull the central master branch into the local master branch once more, merge it with the dev branch, and finally put the merged code on master.
If the development produced too many commit messages, consider a git-reset as described in section 2.10.
And if the merge turns out to involve a great many changes, it is worth doing one more code review for the sake of code quality.

3.5 Submitting code to the central repository

At this point everything is ready to submit the final code; a git-push does it.

3.6 Summary of the collaborative workflow
As you can see, collaborating with Git is in many ways similar to the CVS workflow, with several steps strengthened:
1. Developers iterate locally and can commit as often as they like without affecting anyone else — they can even develop offline; only one final submission to the central repository is needed.
2. As everyone knows, when managing code with CVS we commit frequently, and the cvs update before each commit often drags down the freshest code other people have just put on the central server; when something then goes wrong, it is hard to tell whether the bug is in your own work or came in with someone else's code. Git keeps developers' work from interfering with one another.
3. It is much better for doing code review before the code is submitted. With CVS, reviews always happened after the commit, so if something was wrong there was no way to keep bad code off the server. With Git, all development happens locally before anything is pushed to the central repository, so it is easy to review first and only then submit — better for code quality. And, as you can probably feel, reviewing is easier in the Git workflow anyway, because there are fewer interfering factors.
4. Multiple branches make it easy to carry on several strands of work without them affecting each other. For example, when user2 reviews user1's code, user2 can conveniently park their own working state, switch to user1's code branch, and after the review switch back to exactly the work that was interrupted.

Granted, these benefits do make the operations somewhat more complex than CVS's, but we think the trouble is well worth it compared with what is gained. Once everyone is used to it, it does not add much complexity, and the development workflow feels more natural.

Get your hands dirty and experiment — go experience what makes Git attractive. Let's enjoy it!

Wednesday, November 25, 2009

Adding iptables and fail2ban on Ubuntu 8.04 to improve security

1. Install iptables


1) apt-get install iptables

iptables needs no init script: rules take effect the moment they are set, and are cleared automatically after a shutdown. So be careful when changing the configuration over a remote login — never switch the default policies to DROP all at once, or you will lose your SSH connection. Writing the following script makes the rules easy to modify and debug.

2) Then create a script such as ~/iptables-init.sh in your home directory (assuming the server's outward-facing NIC is eth0):


iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT # keep already-open sessions valid; this protects a remote debugging login, since iptables changes take effect immediately
iptables -A INPUT -i lo -j ACCEPT # allow loopback, i.e. localhost / 127.0.0.1
iptables -A INPUT -p tcp -i eth0 --dport ssh -j ACCEPT # allow remote SSH logins
iptables -A INPUT -p tcp -i eth0 --dport 80 -j ACCEPT # allow outside access to the www server
iptables -A INPUT -p tcp -i eth0 --dport 21 -j ACCEPT # allow outside access to the ftp server
iptables -A INPUT -j DROP
iptables-save > /etc/iptables.rules

Then make the script executable: chmod +x ./iptables-init.sh
and run it: ./iptables-init.sh

3) Load the firewall rules when the NIC comes up (be very careful here: if the NIC fails to come up, you will not be able to log in at all)

It is best to first run
iptables-restore < /etc/iptables.rules
and check the output of iptables -L -v carefully; if necessary, open another putty window and verify that you can still log in.
Then edit /etc/network/interfaces so the system applies the rules automatically; the last line below is the one to add — the file it loads must match the file the script above saved to, and must be given as an absolute path.
auto eth0
iface eth0 inet dhcp
pre-up iptables-restore < /etc/iptables.rules

Note: the reason for reloading the iptables rules right when the NIC is brought up — rather than putting an iptables-init script in /etc/rc.local or elsewhere, as is sometimes suggested — is that there is still a window between the NIC coming up and rc.local running during which the firewall would be inactive, so the method above is best.

Originally, to let FTP support PASV mode so that clients behind their own firewalls can connect to the server more easily, you would run, from a suitable startup script,
modprobe ip_conntrack_ftp
but the Ubuntu kernel on Linode is not built with that module, so I had to give this up — no great loss, since PASV mode is less secure than Active (PORT) mode anyway.
Reference:
1. Ubuntu Server edition iptables basic setup guide

2. Install fail2ban


1) apt-get install fail2ban

2) Check /etc/fail2ban/jail.conf and verify that the paths logpath = /var/log/auth.log and /var/log/vsftpd.log are correct. Then edit /etc/fail2ban/jail.local and turn on the switches below (putting them in jail.local keeps an upgrade from turning them back off):

[DEFAULT]
bantime=7200 # ban offending IPs for at least 2 hours
[ssh]
enabled = true
filter = sshd
[vsftpd]
enabled = true
filter = vsftpd

Check /etc/vsftpd.conf and make sure the settings below are correct; if you change them, restart the ftp service with /etc/init.d/vsftpd restart. fail2ban can also work together with Wu-FTP, but for consistency we only configured the vsftpd filter above, which only understands vsftpd logs, so vsftpd must also write its log in its own format:

xferlog_enable=YES
vsftpd_log_file=/var/log/vsftpd.log

Make sure the following two lines are commented out, otherwise the log is written in wu-ftpd format and cannot be parsed by fail2ban!!
xferlog_file=/var/log/vsftpd.log
xferlog_std_format=YES

Then start the service with /etc/init.d/fail2ban start.
Each time it starts, fail2ban adds an entry to the iptables INPUT chain that points at fail2ban's own chain, which is where the banned-IP records live; so /etc/iptables.rules only has to manage the static rules on its own.
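To see the bans at the iptables level directly — the chain name fail2ban-ssh is what fail2ban typically creates for the [ssh] jail, so adjust it if your version names it differently:

iptables -L fail2ban-ssh -n -v        # the currently banned addresses for the ssh jail
iptables -L INPUT -n --line-numbers   # shows the fail2ban chains hooked in at the top of INPUT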

3. FTP client settings


The FTP client side is a bit trickier, because in PORT mode the client has to tell the FTP server its own public IP, and if the user is also on a LAN behind a firewall an ordinary client cannot do that — it will most likely tell the FTP server something like "PORT 192.168.0.x" and get back "550 illegal port command". The latest FileZilla, however, can learn its external IP via http://ip.filezilla-project.org/ip.php, which solves the problem.

4. Viewing banned IPs


fail2ban-client status ssh -iptables
fail2ban-client status vsftpd  -iptables

Some common Windows Mobile settings

2008-11-06 15:24


1. Ringtones for incoming calls
Put the ringtone files in the rings directory under windows, or create a My Documents folder on the storage card (mind the capitalization) and put the mp3 files there. Supported formats for call ringtones are mp3 and wmv.

2. SMS ringtones
SMS tones go straight into the Windows directory on the device; wav format is supported.

3. WM5: fixing screen freezes and heavy battery drain
The WM5 freezes and battery drain are caused entirely by the built-in ActiveSync. Microsoft made the rather foolish design decision that ActiveSync always starts automatically and goes looking for something to sync with, whether or not the PDA is connected — and if you close ActiveSync by hand, it restarts five minutes later!
The problem can be solved with the following settings:
1. Turn on the PDA.
2. Go to Start > Programs > ActiveSync and open ActiveSync.
3. Tap the menu in the bottom-right corner of the screen. Note that in this menu the Schedule item cannot be changed — and that is exactly where the problem lies.

Now trick ActiveSync into letting us change the schedule.

1. In the ActiveSync menu, tap Add server source.
2. Fill in anything you like: the server address, user name, password and domain are all arbitrary; on the next step untick every kind of data to sync.
3. A so-called "Exchange server" has now been added, and the ActiveSync main screen shows a new connection, Exchange Server.
4. Tap Menu on the ActiveSync main screen — the Schedule option can now be tapped!
5. On an A01 ROM the peak-time interval shows as every 5 minutes, on an A06 ROM every 10 minutes. That is the culprit!
6. Set both peak and off-peak times to Manually.
7. Tap OK.
8. Now the Exchange server can be removed: back on the ActiveSync main screen, tap Menu > Options > delete the Exchange server > Yes. (Done.)

4. Soft reset and hard reset
Soft reset: restarts the phone's operating system — hold the power button (next to the storage card slot) for 5 seconds, choose Yes, then press the power button again.
Hard reset: restores the phone to factory settings; everything on the phone — installed software, contacts and so on — is gone. The effect is the same as a factory reset; one is a hardware operation, the other software. When the phone will not boot into the system normally, a hard reset is the usual answer: hold the record key on the left + the voice-command key on the right + the centre confirm key + press and release the RESET button, keep holding for more than 30 seconds, and when the confirmation screen appears press the Send (dial) key. If you can still get into the system, Start - Settings - System - Clear Storage does the same thing.

5. Registry tweaks
Use a registry editor application for these. Changes take effect only after a soft reset!

1. Change what the title bar shows for date and time:
Under HKEY_LOCAL_MACHINE\Software\Microsoft\Shell create a DWORD value named TBOpt:
=0 shows no date or time information at all;
=1 shows only the time;
=2 shows only the date;
=3 shows both date and time, as in the screenshot.

2. Remove the security warning shown the first time a program is run:
HKEY_LOCAL_MACHINE\Security\Policies\Policies\0000101a
=1 no warning;
=0 restore the warning.

3. Change what the two soft keys at the bottom of the screen do:
Left key: HKEY_CURRENT_USER\Software\Microsoft\Today\Keys\112\Open ="\Windows\Calendar.exe" (the action)
default="日历" (the label text, "Calendar")
Right key: HKEY_CURRENT_USER\Software\Microsoft\Today\Keys\113\Open = "\Windows\“开始”菜单\Programs\Contacts.lnk"
default="联系人" (the label text, "Contacts")

4. Add a GPS port settings panel:
After the change and a soft reset, a GPS option appears under Settings/Connections, as in the screenshot.
Add the key/value: HKEY_LOCAL_MACHINE\ControlPanel\GPS Settings\Group = 2   (type: DWORD)
Delete or rename: HKEY_LOCAL_MACHINE\ControlPanel\GPS Settings\redirect

5. If your GPRS network supports EDGE, show "E" in the title bar instead of the usual "G". This does not change the type of GPRS connection actually used; it only tells you whether the network you are on is plain GPRS or EDGE (EDGE, also called 2.75G, is faster than plain GPRS). Works together with item 6; see the screenshots.
HKEY_LOCAL_MACHINE\Drivers\BuiltIn\RIL\EnableDifferGprsEdgeIcon
=1 show the actual type;
=0 always show "G"

6. Once a GPRS connection is up, tapping the "G" or "E" icon in the title bar can show a Disconnect button and the accumulated connection time in the popup:
HKEY_LOCAL_MACHINE\ControlPanel\Phone\Flags2
=16 adds the Disconnect button and the accumulated-time display (16 is a hexadecimal value);
=0 shows neither;
=8 adds only the Disconnect button

7. Change the date/time display format, as in the screenshot. Note that this also changes the short-date format used by other applications, for example the file information shown by Resco Explorer.
HKEY_LOCAL_MACHINE\nls\overrides\SSDte
=ddd/d   for weekday/day

8. Change the ringtone directory:
HKEY_CURRENT_USER\ControlPanel\SoundCategories\Ring\Directory = \Storage Card\Mymusic
Keeping ringtones on the card is not recommended; put them on the device if you can.

9. Two more ways to keep a CAB installer file from being deleted automatically by the system after installation:
HKEY_LOCAL_MACHINE\Software\apps\Microsoft Application Installer\nDynamicDelete
= 0 do not delete automatically;
= 2 delete automatically (the default)

HKEY_CLASSES_ROOT\cabfile\Shell\open\command
=wceload.exe "%1" /nodelete   do not delete automatically;
=wceload.exe "%1"   delete automatically (the default)

10、重复安装应用程序时是否提示重新覆盖安装:
HKEY_LOCAL_MACHINE\Software\apps\Microsoft Application Installer\fAskOptions
= 1 提示;
= 0 不提示

11、在桌面“今日”中增加无线网卡(WiFi)设置快捷方式,就象蓝牙快捷方式一样,仅仅适用于有WiFi的838、830等机子,效果待评估。不过已验证过,Windows下有netui.dll文件
HKEY_LOCAL_MACHINE\Software\Microsoft\Today\Items\"Wireless"
DLL=netui.dll
Order=0
Enabled=1
Type=4
Options=1

12. Disable charging over USB while synchronizing:
HKEY_LOCAL_MACHINE\Drivers\BuiltIn\usbfndrv\EnableUsbCharging
= 1: charge;
= 0: do not charge.

13. Whether to keep the GPRS connection up after boot:
HKEY_LOCAL_MACHINE\Comm\ConnMgr\Providers\{7C4B7A38-5FF7-4bc1-80F6-5DA7870BB1AA}\Connections\|connection name|\AlwaysOn
= 1: always connected;
= 0: do not connect automatically after boot.
"connection name" is the name of the connection configured on the phone, e.g. China Mobile CMWAP, and differs from device to device. All connections show up here, and unused ones can be deleted, which is equivalent to configuring or deleting them in the settings panel.

14. Hiding dangerous options such as "Clear Storage" in Settings
"Clear Storage" on WM5.0 is effectively a hard reset. Needless to say, that is dangerous, especially when a curious friend borrows your phone to play with...
HKEY_LOCAL_MACHINE\ControlPanel\
Most entries under this key correspond to items in the phone's Settings, and most of them contain a value named Group: when it is 0 the item appears on the "Personal" tab, 1 puts it on the "System" tab, 2 puts it on the "Connections" tab, and anything greater than 2 makes it disappear entirely! Changing this value is much better than deleting the corresponding CPL file: if you ever need the item again, just change the value back. So, to hide "Clear Storage", simply set:
HKEY_LOCAL_MACHINE\ControlPanel\Clear Storage\Group
=3

15. Restoring the Y/Z key positions and punctuation on a grey-import 838 keyboard
Under HKEY_CURRENT_USER\ControlPanel\Keybd create a string value "Locale" with the value "0407"; the change takes effect after a soft reset. If only the Y/Z positions are fixed and the punctuation is still wrong, open the registry again and check whether the "Locale" string value is "0407"; if it is "0804", change it to "0407" and soft-reset again.

16. Hiding SIM contacts in the Contacts list
Under WM5 the Contacts list always shows SIM contacts, which leads to duplicate entries. This can be fixed in the registry:
open the registry at HKCU\ControlPanel\Phone
and create a DWORD value named ShowSim with the value 0 (decimal). (1 shows SIM contacts.)
Save, and the change takes effect (some devices may need a restart).

6. The WM5 folder skeleton
Windows Mobile 5.0 is a descendant of Microsoft's desktop Windows and naturally has its own well-organized file system; the built-in File Manager gives free access to the stored files. Here is an overview of the file-system skeleton.

\Storage\Program Files  program files
\windows\      system folder
\Storage\Application Data\Home  Today-screen theme folder
\Storage\Application Data\Sounds ringtone folder
\Storage\Application Data\phonering SMS sound folder
\Storage\My Documents\Mobiclip     FLASH animation folder
\Storage\My Documents\MovieAlbum   video recordings
\Storage\My Documents\Notes     voice recordings (visible under \Windows once hidden items are shown)
\Storage\My Documents\PhotoAlbum  photos taken with the phone
\Storage\windows\AppMgr       temporary files
\Storage\windows\AppMgr\instal    files downloaded to the phone but not yet installed; when storage runs out it is usually because leftover files sit here, so delete or install them
\Storage\windows\Frames  photo frames
\Storage\windows\PhotoID  caller-photo frames
\Storage\windows\Favorites  IE favorites
\Storage\windows\Start Menu  Start menu shortcuts
\Storage\windows\Start\startup  programs run automatically at boot   [CapNotify.Init_tray.MemoryShow.poutlook.WiFiInit]
\Storage\windows\Start Menu\Accessories the Accessories folder of the Start menu
\Storage\windows\Start Menu\Games  the Games folder of the Start menu

[Software notes] [HTC TyTN II/Kaiser/P4550/O2 XDA Stella] Kaiser experience round-up thread

I. Basics

1. Hardware specifications

http://ping.pdafans.com/story.php?product_id=504&stime=1233504000&etime=1233590399

2. Product manuals

Kaiser quick-start guide:

Kaiser user manual:

3. Hard reset procedure

4. Purchase advice:
http://bbs.pdafans.com/forum-201-1.html

II. Troubleshooting and tips

TCPMP V0.72RC1 Simplified Chinese portable build that works under WM6.1 on the Kaiser and similar models, with the FLV plugin 0.43 integrated.       2009-1-12 21:46
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

The powerful SMS app 手机密使 now runs on the Kaiser under WM6.1; good news for those who could not use it on WM6         2008-9-28 20:50
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

Patch that completely fixes the keyboard mis-mapping on the US TyTN II / Kaiser / Tilt           2008-9-5 22:44
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

[Sharing] Using my Kaiser as an example: a 100% fix for Samsung Omnia side plugins not showing or running          2008-8-28 02:18
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

Kaiser ROM cooking tutorial: let everyone have their own ROM             2008-8-26 00:43
http://bbs.pdafans.com/viewthrea ... hlight=%E2%FD%C8%F6

Detailed tutorial on unlocking the Kaiser network lock          2008-4-12 13:23
http://bbs.pdafans.com/viewthrea ... hlight=%E2%FD%C8%F6

Getting Bluetooth voice dialing to work on the Kaiser, with the program and a tutorial         2008-3-2 23:59
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

HTC TyTN II Kaiser ROM flashing / software / games round-up thread          2008-2-27 12:32
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

An original tool: the first-boot setup utility I built into my HTC Kaiser ROM          2008-2-27 02:27
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

Kaiser / TyTN II / T-MDA input method problem (solved)       2007-12-14 14:02
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

Kaiser (TyTN II) resource collection thread!! Radio updated to 1.71.09.01         2007-10-7 13:12
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

III. Reviews and impressions

Samsung i900 unboxing, quick screen test and comparison with the Diamond (repost); post #24 compares it with the HTC Kaiser; camera mute patch    2008-9-24 03:19
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

My 'HTC TyTN' + 'HTC Kaiser' girly Today-screen diary: practical first, cute second         2008-8-13 08:27
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

Kaiser QSKOM 6.0: a non-professional review                  2008-6-4 23:19
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

Hands-on Kaiser test for reference, with no attempt to talk you into or out of buying           2008-5-24 16:27
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

TyTN II Kaiser P4550 purchase report, extremely detailed           2008-5-6 20:09
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

Model comparison: HTC TyTN II (Kaiser) vs O2 Atom Life -- By Nick Sun         2007-12-7 20:55
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

IV. ROM-related

Note: flashing ROMs is risky, so proceed with caution. The links listed here are not recommendations, nor have they been tested and confirmed to be risk-free; think twice before flashing, and you bear all the consequences yourself!

Updated! Kaiser Build 21109 flashable ROM in two versions (1.95 / 3.57 drivers, very fast GPS fix)           2009-1-11 00:05
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

Release: P4550 Kaiser RUU_kaiser_XV360 1.0 ROM, corrected (clean edition added)               2009-1-3 02:02
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

Kaiser 21109 New Year's Day kitchen templates in two versions (1.65 and 1.27 radio); come and get them!          2009-1-3 00:03
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

Release: Kaiser 21109 relatively clean edition, for radio 1.65 and above                     2009-1-2 23:44
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

Kaiser_Tuolaji 6_20954_20954_M2D YaHei (evening of 2008-12-31) ^_^        2008-12-31 23:59
http://bbs.pdafans.com/viewthrea ... hlight=%E2%FD%C8%F6

99kaiser v2 21109 Build 21109 AKU5.0.0: the New Year gifts keep coming          2008-12-31 09:03
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

Kaiser_Tuolaji 5_20924_20924_M2D YaHei + unmarked SimSun + kitchen template (evening of 2008-12-25) ^_^           2008-12-23 00:55
http://bbs.pdafans.com/viewthrea ... hlight=%E2%FD%C8%F6

(Reposted from 52dopod, 12-16) RUU_Kaiser_Tinni_CHS_20755 final modified edition (D3D + landscape ClearType)          2008-12-19 20:54
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

Release: my own moderately bundled kaiser 20931 ROM (second 20931 edition released 2008-12-06)         2008-11-22 22:02
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

Kaiser 20931_20931_M2D YaHei + bare HTC Home SimSun_Tuolaji 4 (evening of 2008-11-30) ^_^           2008-11-14 11:46
http://bbs.pdafans.com/viewthrea ... hlight=%E2%FD%C8%F6

The most stable and fastest Kaiser ROM in my experience                  2008-10-18 14:26
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

Kaiser_20721_20721_Tuolaji 3 (YaHei with the plum-blossom mark + SimSun without it) (afternoon of 09-03) ^_^           2008-8-30 22:59
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

(Kaiser / TyTN II) kaiser_20273.1.3.3_CHS with known issues fixed, early morning of 2008-08-22 ^_^|||         2008-8-21 23:22
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

Kaiser (TyTN II) ROM_WM6.1_20270_20273_CHS_Tuolaji 2 (early morning of 2008-08-22) ^_^       2008-8-18 00:04
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

Kaiser (TyTN II) ROM_WM6.1_19214_20270_CHS_Tuolaji 1 (early morning of 2008-07-25) ^_^         2008-7-25 12:09
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

Kaiser HardSPL 3.28 (includes tutorials for flashing the new version and reverting to HardSPL 1.0)       2008-6-3 10:34
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

HTC Kaiser 6.1 19212 official ROM, download opened 2008-05-23 (all-new drivers)          2008-5-22 11:03
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

HTC KAISER - PDAVIET ROM WM6.1 (Build 19701.1.1.0) {SimSun-modified edition updated May 14}           2008-5-7 12:31
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

DFT QooQoo Kaiser CHS 19400.1.0.0 Simplified Chinese ROM, released by DFT              2008-4-23 18:14
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

Simplified Chinese modification based on the official Kaiser_3.08.161_build19199 (SimSun edition updated April 30)        2008-4-2 09:19
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

HTC TyTN II / Kaiser Simplified Chinese ROM 18538.0.7.0, QSKOM 2.4           2008-1-18 22:41
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

[QSKOM] Kaiser Simplified Chinese ROM, official 18128 kernel           2007-11-12 04:03
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

Welcoming the arrival of the Kaiser ROM-flashing era          2007-10-7 03:08
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

Kaiser officially released!!! The upgraded Hermes!!! Official manual download!!!           2007-8-29 09:23
http://bbs.pdafans.com/viewthrea ... mp;highlight=Kaiser

Manual for 7z

P7ZIP(1) P7ZIP(1)

NAME
7-Zip - A file archiver with highest compression ratio

SYNOPSIS
7z [adeltux] [-] [SWITCH] <ARCHIVE_NAME> <ARGUMENTS>...

DESCRIPTION
7-Zip is a file archiver with the highest compression ratio. The program
supports 7z (which implements the LZMA compression algorithm), ZIP, CAB,
ARJ, GZIP, BZIP2, TAR, CPIO, RPM and DEB formats. The compression ratio in
the new 7z format is 30-50% better than the ratio in ZIP format.

7z uses plugins to handle archives.

FUNCTION LETTERS
a Add

d Delete

e Extract

l List

t Test

u Update

x eXtract with full paths

SWITCHES
-ai[r[-|0]]{@listfile|!wildcard}
Include archives

-ax[r[-|0]]{@listfile|!wildcard}
eXclude archives

-bd Disable percentage indicator

-i[r[-|0]]{@listfile|!wildcard}
Include filenames

-l don't store symlinks; store the files/directories they point to
(CAUTION: the scanning stage can never end because of recursive
symlinks like 'ln -s .. ldir')

-m{Parameters}
Set Compression Method

-o{Directory}
Set Output directory

-p{Password}
Set Password

-r[-|0]
Recurse subdirectories (CAUTION: this flag does not do what you
think, avoid using it)

-sfx[{name}]
Create SFX archive

-si Read data from StdIn (eg: tar cf - directory | 7z a -si directory.tar.7z)

-so Write data to StdOut (eg: 7z x -so directory.tar.7z | tar xf -)

-slt Sets technical mode for l (list) command

-t{Type}
Type of archive

-v{Size}[b|k|m|g]
Create volumes

-u[-][p#][q#][r#][x#][y#][z#][!newArchiveName]
Update options

-w[path]
Set Working directory

-x[r[-|0]]{@listfile|!wildcard}
Exclude filenames

-y Assume Yes on all queries

Backup and limitations
DO NOT USE the 7-zip format for backup purposes on Linux/Unix, because:
- 7-zip does not store the owner/group of the file.

On Linux/Unix, in order to back up directories you must use tar:
- to back up a directory: tar cf - directory | 7za a -si directory.tar.7z
- to restore your backup: 7za x -so directory.tar.7z | tar xf -

If you want to send files and directories (not the owner of file) to
other Unix/MacOS/Windows users, you can use the 7-zip format.

example: 7za a directory.7z directory

Do not use "-r" because this flag does not do what you think.

Do not use directory/* because of ".*" files (example: "directory/*"
does not match "directory/.profile")

EXAMPLE 1
7z a -t7z -m0=lzma -mx=9 -mfb=64 -md=32m -ms=on archive.7z dir1
adds all files from directory "dir1" to archive archive.7z using
"ultra settings"

-t7z 7z archive

-m0=lzma
lzma method

-mx=9 level of compression = 9 (Ultra)

-mfb=64
number of fast bytes for LZMA = 64

-md=32m
dictionary size = 32 megabytes

-ms=on solid archive = on

EXAMPLE 2
7z a -sfx archive.exe dir1
add all files from directory "dir1" to SFX archive archive.exe
(Remark : SFX archive MUST end with ".exe")

MORE EXAMPLES
You will find more examples in /usr/share/doc/p7zip/DOCS/MANUAL.

SEE ALSO
7za(1) 7zr(1)

AUTHOR
Written for Debian by Mohammed Adnene Trojette.

Mohammed Adnene Trojette October 31 2004 P7ZIP(1)

http://manual.aun.pl/7z.html

File manipulation



This is a guide to basic file manipulation in OCaml using only what the standard library provides.

Official documentation for the modules of interest: Pervasives, Printf.

The standard library doesn't provide functions that directly read a file into a string or save a string into a file. Such functions can be found in third-party libraries such as Extlib. See Std.input_file and Std.output_file.
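
For example, here is a minimal sketch of such a read-into-a-string helper built only from standard-library pieces (the name read_whole_file is our own, not a standard function):

let read_whole_file path =
  let ic = open_in_bin path in
  let len = in_channel_length ic in   (* size of the file in bytes *)
  let buf = Buffer.create len in
  Buffer.add_channel buf ic len;      (* read exactly len bytes into the buffer *)
  close_in ic;
  Buffer.contents buf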


Buffered channels


The normal way of opening a file in OCaml returns a channel. There are two kinds of channels:

  • channels that write to a file: type out_channel

  • channels that read from a file: type in_channel


Writing


For writing into a file, you would do this:

  1. Open the file to obtain an out_channel

  2. Write stuff to the channel

  3. If you want to force writing to the physical device, you must flush the channel, otherwise writing will not take place immediately.

  4. When you are done, you can close the channel. This flushes the channel automatically.


Commonly used functions: open_out, open_out_bin, flush, close_out, close_out_noerr

Standard out_channels: stdout, stderr

Reading


For reading data from a file you would do this:

  1. Open the file to obtain an in_channel

  2. Read characters from the channel. Reading consumes the channel, so if you read a character, the channel will point to the next character in the file.

  3. When there are no more characters to read, the End_of_file exception is raised. Often, this is where you want to close the channel.


Commonly used functions: open_in, open_in_bin, close_in, close_in_noerr

Standard in_channel: stdin
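
A common idiom, sketched here, is to read a file line by line until End_of_file is raised (the file name is just an example):

let () =
  let ic = open_in "example.dat" in
  try
    while true do
      print_endline (input_line ic)   (* echo each line to stdout *)
    done
  with End_of_file ->
    close_in ic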

Seeking


Whenever you write or read something to or from a channel, the current position changes to the next character after what you just wrote or read. Occasionally, you may want to skip to a particular position in the file, or restart reading from the beginning. This is possible for channels that point to regular files; use seek_in or seek_out.
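
For instance, a small sketch of rewinding an input channel with seek_in (reusing the example.dat file created in the example further below):

let () =
  let ic = open_in_bin "example.dat" in
  let first = input_char ic in      (* read the first character *)
  seek_in ic 0;                     (* jump back to the beginning of the file *)
  assert (input_char ic = first);   (* reading again yields the same character *)
  close_in ic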

Gotchas



  • Don't forget to flush your out_channels if you want to actually write something. This is particularly important if you are writing to non-files such as the standard output (stdout) or a socket.

  • Don't forget to close any unused channel, because operating systems have a limit on the number of files that can be opened simultaneously. You must catch any exception that would occur during the file manipulation, close the corresponding channel, and re-raise the exception.

  • The Unix module provides access to non-buffered file descriptors among other things. It provides standard file descriptors that have the same name as the corresponding standard channels: stdin, stdout and stderr. Therefore if you do open Unix, you may get type errors. If you want to be sure that you are using the stdout channel and not the stdout file descriptor, you can prepend it with the module name where it comes from: Pervasives.stdout. Note that most things that don't seem to belong to any module actually belong to the Pervasives module, which is automatically opened.

  • open_out and open_out_bin truncate the given file if it already exists! Use open_out_gen if you want an alternate behavior.
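
For example, a sketch of appending to a file with open_out_gen instead of truncating it (the flag list and permissions here are one reasonable choice, not the only one):

let () =
  let oc = open_out_gen [Open_wronly; Open_append; Open_creat] 0o644 "example.dat" in
  output_string oc "another line\n";   (* written after the existing contents *)
  close_out oc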


Example




open Printf

let file = "example.dat"
let message = "Hello!"

let _ =

  (* Write message to file *)
  let oc = open_out file in    (* create or truncate file, return channel *)
  fprintf oc "%s\n" message;   (* write something *)
  close_out oc;                (* flush and close the channel *)

  (* Read file and display the first line *)
  let ic = open_in file in
  try
    let line = input_line ic in   (* read line from in_channel and discard \n *)
    print_endline line;           (* write the result to stdout *)
    flush stdout;                 (* write on the underlying device now *)
    close_in ic                   (* close the input channel *)

  with e ->                       (* some unexpected exception occurs *)
    close_in_noerr ic;            (* emergency closing *)
    raise e                       (* exit with error: files are closed but
                                     channels are not flushed *)

(* normal exit: all channels are flushed and closed *)


HOWTO do Linux kernel development

Everything you ever wanted to know about Linux kernel procedures.

This is the be-all, end-all document on this topic. It contains instructions on how to become a Linux kernel developer and how to learn to work with the Linux kernel development community. It tries to not contain anything related to the technical aspects of kernel programming, but will help point you in the right direction for that.

If anything in this document becomes out of date, please send in patches to the maintainer of this file, who is listed at the bottom of the document.

Introduction


So, you want to learn how to become a Linux kernel developer? Or you have been told by your manager, "Go write a Linux driver for this device." This document's goal is to teach you everything you need to know to achieve this by describing the process you need to go through, and hints on how to work with the community. It will also try to explain some of the reasons why the community works like it does.

The kernel is written mostly in C, with some architecture-dependent parts written in assembly. A good understanding of C is required for kernel development. Assembly (any architecture) is not required unless you plan to do low-level development for that architecture. Though they are not a good substitute for a solid C education and/or years of experience, the following books are good for, if anything, reference:

  • "The C Programming Language" by Kernighan and Ritchie [Prentice Hall]

  • "Practical C Programming" by Steve Oualline [O'Reilly]


The kernel is written using GNU C and the GNU toolchain. While it adheres to the ISO C89 standard, it uses a number of extensions that are not featured in the standard. The kernel is a freestanding C environment, with no reliance on the standard C library, so some portions of the C standard are not supported. Arbitrary long long divisions and floating point are not allowed. It can sometimes be difficult to understand the assumptions the kernel has on the toolchain and the extensions that it uses, and unfortunately there is no definitive reference for them. Please check the gcc info pages (info gcc) for some information on them.

Please remember that you are trying to learn how to work with the existing development community. It is a diverse group of people, with high standards for coding, style and procedure. These standards have been created over time based on what they have found to work best for such a large and geographically dispersed team. Try to learn as much as possible about these standards ahead of time, as they are well documented; do not expect people to adapt to you or your company's way of doing things.

Legal Issues


The Linux kernel source code is released under the GPL. Please see the file, COPYING, in the main directory of the source tree, for details on the license. If you have further questions about the license, please contact a lawyer, and do not ask on the Linux kernel mailing list. The people on the mailing lists are not lawyers, and you should not rely on their statements on legal matters.

For common questions and answers about the GPL, please see: http://www.gnu.org/licenses/gpl-faq.html

Documentation


The Linux kernel source tree has a large range of documents that are invaluable for learning how to interact with the kernel community. When new features are added to the kernel, it is recommended that new documentation files are also added which explain how to use the feature. When a kernel change causes the interface that the kernel exposes to userspace to change, it is recommended that you send the information, or a patch to the manual pages explaining the change, to the manual pages maintainer at [email protected]

Here is a list of files that are in the kernel source tree that are required reading:


  • README

    This file gives a short background on the Linux kernel and describes what is necessary to do to configure and build the kernel. People who are new to the kernel should start here.



  • Documentation/Changes

    This file gives a list of the minimum levels of various software packages that are necessary to build and run the kernel successfully.



  • Documentation/CodingStyle

    This describes the Linux kernel coding style, and some of the rationale behind it. All new code is expected to follow the guidelines in this document. Most maintainers will only accept patches if these rules are followed, and many people will only review code if it is in the proper style.



  • Documentation/SubmittingPatches



  • Documentation/SubmittingDrivers

    These files describe in explicit detail how to successfully create and send a patch, including (but not limited to):

    • Email contents

    • Email format

    • Who to send it to


    Following these rules will not guarantee success (as all patches are subject to scrutiny for content and style), but not following them will almost always prevent it.

    Other excellent descriptions of how to create patches properly are:




  • Documentation/stable_api_nonsense.txt

    This file describes the rationale behind the conscious decision to not have a stable API within the kernel, including things like:

    • Subsystem shim-layers (for compatibility?)

    • Driver portability between Operating Systems.

    • Mitigating rapid change within the kernel source tree (or preventing rapid change)


    This document is crucial for understanding the Linux development philosophy and is very important for people moving to Linux from development on other Operating Systems.



  • Documentation/SecurityBugs

    If you feel you have found a security problem in the Linux kernel, please follow the steps in this document to help notify the kernel developers, and help solve the issue.



  • Documentation/ManagementStyle

    This document describes how Linux kernel maintainers operate and the shared ethos behind their methodologies. This is important reading for anyone new to kernel development (or anyone simply curious about it), as it resolves a lot of common misconceptions and confusion about the unique behavior of kernel maintainers.



  • Documentation/stable_kernel_rules.txt

    This file describes the rules on how the stable kernel releases happen, and what to do if you want to get a change into one of these releases.



  • Documentation/kernel-docs.txt

    A list of external documentation that pertains to kernel development. Please consult this list if you do not find what you are looking for within the in-kernel documentation.



  • Documentation/applying-patches.txt

    A good introduction describing exactly what a patch is and how to apply it to the different development branches of the kernel.



The kernel also has a large number of documents that can be automatically generated from the source code itself. This includes a full description of the in-kernel API, and rules on how to handle locking properly. The documents will be created in the Documentation/DocBook/ directory and can be generated as PDF, Postscript, HTML, and man pages by running:
make pdfdocs
make psdocs
make htmldocs
make mandocs

respectively from the main kernel source directory.

Becoming A Kernel Developer


If you do not know anything about Linux kernel development, you should look at the Linux KernelNewbies project: http://kernelnewbies.org It consists of a helpful mailing list where you can ask almost any type of basic kernel development question (make sure to search the archives first, before asking something that has already been answered in the past.) It also has an IRC channel that you can use to ask questions in real-time, and a lot of helpful documentation that is useful for learning about Linux kernel development.

The website has basic information about code organization, subsystems, and current projects (both in-tree and out-of-tree). It also describes some basic logistical information, like how to compile a kernel and apply a patch.

If you do not know where you want to start, but you want to look for some task to start doing to join into the kernel development community, go to the Linux Kernel Janitor's project: http://janitor.kernelnewbies.org/ It is a great place to start. It describes a list of relatively simple problems that need to be cleaned up and fixed within the Linux kernel source tree. Working with the developers in charge of this project, you will learn the basics of getting your patch into the Linux kernel tree, and possibly be pointed in the direction of what to go work on next, if you do not already have an idea.

If you already have a chunk of code that you want to put into the kernel tree, but need some help getting it in the proper form, the kernel-mentors project was created to help you out with this. It is a mailing list, and can be found at: http://selenic.com/mailman/listinfo/kernel-mentors

Before making any actual modifications to the Linux kernel code, it is imperative to understand how the code in question works. For this purpose, nothing is better than reading through it directly (most tricky bits are commented well), perhaps even with the help of specialized tools. One such tool that is particularly recommended is the Linux Cross-Reference project, which is able to present source code in a self-referential, indexed webpage format. An excellent up-to-date repository of the kernel code may be found at: http://sosdg.org/~coywolf/lxr/

The development process


Linux kernel development process currently consists of a few different main kernel "branches" and lots of different subsystem-specific kernel branches. These different branches are:

  • main 2.6.x kernel tree

  • 2.6.x.y -stable kernel tree

  • 2.6.x -git kernel patches

  • 2.6.x -mm kernel patches

  • subsystem specific kernel trees and patches


2.6.x kernel tree


2.6.x kernels are maintained by Linus Torvalds, and can be found on kernel.org in the pub/linux/kernel/v2.6/ directory. Its development process is as follows:

  • As soon as a new kernel is released, a two-week window opens; during this period maintainers can submit big diffs to Linus, usually the patches that have already been included in the -mm kernel for a few weeks. The preferred way to submit big changes is using git (the kernel's source management tool; more information can be found at http://git.or.cz/), but plain patches are also just fine. A minimal example of the git-based flow follows this list.

  • After two weeks a -rc1 kernel is released, and from then on it is only possible to push patches that do not include new features that could affect the stability of the whole kernel. Please note that a whole new driver (or filesystem) might be accepted after -rc1, because there is no risk of causing regressions with such a change as long as the change is self-contained and does not affect areas outside of the code that is being added. git can be used to send patches to Linus after -rc1 is released, but the patches need to also be sent to a public mailing list for review.

  • A new -rc is released whenever Linus deems the current git tree to be in a reasonably sane state adequate for testing. The goal is to release a new -rc kernel every week.

  • The process continues until the kernel is considered "ready"; the whole cycle should last around 6 weeks.
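
As a minimal illustration of the git-based submission flow mentioned above (the branch name, patch file names, and recipient placeholder are ours, not taken from this document):

git format-patch origin/master..my-feature                 # turn each commit into a mail-ready patch file
git send-email --to "<list or maintainer address>" *.patch   # mail the series out for review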


It is worth mentioning what Andrew Morton wrote on the linux-kernel mailing list about kernel releases:
"Nobody knows when a kernel will be released, because it's released according to perceived bug status, not according to a preconceived timeline."

2.6.x.y -stable kernel tree


Kernels with 4 digit versions are -stable kernels. They contain relatively small and critical fixes for security problems or significant regressions discovered in a given 2.6.x kernel.

This is the recommended branch for users who want the most recent stable kernel and are not interested in helping test development/experimental versions.

If no 2.6.x.y kernel is available, then the highest numbered 2.6.x kernel is the current stable kernel.

2.6.x.y are maintained by the "stable" team [email protected], and are released almost every other week.

The file Documentation/stable_kernel_rules.txt in the kernel tree documents what kinds of changes are acceptable for the -stable tree, and how the release process works.

2.6.x -git patches


These are daily snapshots of Linus' kernel tree which are managed in a git repository (hence the name.) These patches are usually released daily and represent the current state of Linus' tree. They are more experimental than -rc kernels since they are generated automatically without even a cursory glance to see if they are sane.

2.6.x -mm kernel patches


These are experimental kernel patches released by Andrew Morton. Andrew takes all of the different subsystem kernel trees and patches and mushes them together, along with a lot of patches that have been plucked from the linux-kernel mailing list. This tree serves as a proving ground for new features and patches. Once a patch has proved its worth in -mm for a while Andrew or the subsystem maintainer pushes it on to Linus for inclusion in mainline.

It is heavily encouraged that all new patches get tested in the -mm tree before they are sent to Linus for inclusion in the main kernel tree.

These kernels are not appropriate for use on systems that are supposed to be stable and they are more risky to run than any of the other branches.

If you wish to help out with the kernel development process, please test and use these kernel releases and provide feedback to the linux-kernel mailing list if you have any problems, and if everything works properly.

In addition to all the other experimental patches, these kernels usually also contain any changes in the mainline -git kernels available at the time of release.

The -mm kernels are not released on a fixed schedule, but usually a few -mm kernels are released in between each -rc kernel (1 to 3 is common).

Subsystem Specific kernel trees and patches


A number of the different kernel subsystem developers expose their development trees so that others can see what is happening in the different areas of the kernel. These trees are pulled into the -mm kernel releases as described above.

Here is a list of some of the different kernel trees available:

git trees:

  • Kbuild development tree, Sam Ravnborg [email protected] kernel.org:/pub/scm/linux/kernel/git/sam/kbuild.git

  • ACPI development tree, Len Brown [email protected] kernel.org:/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6.git

  • Block development tree, Jens Axboe [email protected] kernel.org:/pub/scm/linux/kernel/git/axboe/linux-2.6-block.git

  • DRM development tree, Dave Airlie [email protected] kernel.org:/pub/scm/linux/kernel/git/airlied/drm-2.6.git

  • ia64 development tree, Tony Luck [email protected] kernel.org:/pub/scm/linux/kernel/git/aegl/linux-2.6.git

  • ieee1394 development tree, Jody McIntyre [email protected] kernel.org:/pub/scm/linux/kernel/git/scjody/ieee1394.git

  • infiniband, Roland Dreier [email protected] kernel.org:/pub/scm/linux/kernel/git/roland/infiniband.git

  • libata, Jeff Garzik [email protected] kernel.org:/pub/scm/linux/kernel/git/jgarzik/libata-dev.git

  • network drivers, Jeff Garzik [email protected] kernel.org:/pub/scm/linux/kernel/git/jgarzik/netdev-2.6.git

  • pcmcia, Dominik Brodowski [email protected] kernel.org:/pub/scm/linux/kernel/git/brodo/pcmcia-2.6.git

  • SCSI, James Bottomley [email protected] kernel.org:/pub/scm/linux/kernel/git/jejb/scsi-misc-2.6.git


Other git kernel trees can be found listed at http://kernel.org/git

quilt trees:

  • USB, PCI, Driver Core, and I2C, Greg Kroah-Hartman [email protected] kernel.org/pub/linux/kernel/people/gregkh/gregkh-2.6/


Bug Reporting


The kernel bugzilla at bugzilla.kernel.org is where the Linux kernel developers track kernel bugs. Users are encouraged to report all bugs that they find in this tool. For details on how to use the kernel bugzilla, please see: http://test.kernel.org/bugzilla/faq.html

The file REPORTING-BUGS in the main kernel source directory has a good template for how to report a possible kernel bug, and details what kind of information is needed by the kernel developers to help track down the problem.

Mailing lists


As some of the above documents describe, the majority of the core kernel developers participate on the Linux Kernel Mailing list. Details on how to subscribe and unsubscribe from the list can be found at: http://vger.kernel.org/vger-lists.html#linux-kernel There are archives of the mailing list on the web in many different places. Use a search engine to find these archives. For example: http://dir.gmane.org/gmane.linux.kernel It is highly recommended that you search the archives about the topic you want to bring up, before you post it to the list. A lot of things already discussed in detail are only recorded at the mailing list archives.

Most of the individual kernel subsystems also have their own separate mailing list where they do their development efforts. See the MAINTAINERS file for a list of what these lists are for the different groups.

Many of the lists are hosted on kernel.org. Information on them can be found at: http://vger.kernel.org/vger-lists.html

Please remember to follow good behavioral habits when using the lists. Though a bit cheesy, the following URL has some simple guidelines for interacting with the list (or any list): http://www.albion.com/netiquette/

If multiple people respond to your mail, the CC: list of recipients may get pretty large. Don't remove anybody from the CC: list without a good reason, and don't reply only to the list address. Get used to receiving the mail twice, once from the sender and once from the list, and don't try to tune that by adding fancy mail headers; people will not like it.

Remember to keep the context and the attribution of your replies intact, keep the "John Kernelhacker wrote ...:" lines at the top of your reply, and add your statements between the individual quoted sections instead of writing at the top of the mail.

If you add patches to your mail, make sure they are plain readable text as stated in Documentation/SubmittingPatches. Kernel developers don't want to deal with attachments or compressed patches; they may want to comment on individual lines of your patch, which works only that way. Make sure you use a mail program that does not mangle spaces and tab characters. A good first test is to send the mail to yourself and try to apply your own patch by yourself. If that doesn't work, get your mail program fixed or change it until it works.
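
Two common ways to dry-run that self-test from the top of a kernel tree (the patch file name here is a placeholder):

patch -p1 --dry-run < my-change.patch    # classic patch(1) check; applies nothing
git apply --check my-change.patch        # git's equivalent sanity check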

Above all, please remember to show respect to other subscribers.

Working with the community


The goal of the kernel community is to provide the best possible kernel there is. When you submit a patch for acceptance, it will be reviewed on its technical merits and those alone. So, what should you be expecting?

  • criticism

  • comments

  • requests for change

  • requests for justification

  • silence


Remember, this is part of getting your patch into the kernel. You have to be able to take criticism and comments about your patches, evaluate them at a technical level and either rework your patches or provide clear and concise reasoning as to why those changes should not be made. If there are no responses to your posting, wait a few days and try again, sometimes things get lost in the huge volume.

What should you not do?

  • expect your patch to be accepted without question

  • become defensive

  • ignore comments

  • resubmit the patch without making any of the requested changes


In a community that is looking for the best technical solution possible, there will always be differing opinions on how beneficial a patch is. You have to be cooperative, and willing to adapt your idea to fit within the kernel. Or at least be willing to prove your idea is worth it. Remember, being wrong is acceptable as long as you are willing to work toward a solution that is right.

It is normal that the answers to your first patch might simply be a list of a dozen things you should correct. This does not imply that your patch will not be accepted, and it is not meant against you personally. Simply correct all issues raised against your patch and resend it.

Differences between the kernel community and corporate structures


The kernel community works differently than most traditional corporate development environments. Here is a list of things you can do to try to avoid problems:

Good things to say regarding your proposed changes:

  • "This solves multiple problems."

  • "This deletes 2000 lines of code."

  • "Here is a patch that explains what I am trying to describe."

  • "I tested it on 5 different architectures..."

  • "Here is a series of small patches that..."

  • "This increases performance on typical machines..."


Bad things you should avoid saying:

  • "We did it this way in AIX/ptx/Solaris, so therefore it must be good..."

  • "I've being doing this for 20 years, so..."

  • "This is required for my company to make money"

  • "This is for our Enterprise product line."

  • "Here is my 1000 page design document that describes my idea"

  • "I've been working on this for 6 months..."

  • "Here's a 5000 line patch that..."

  • "I rewrote all of the current mess, and here it is..."

  • "I have a deadline, and this patch needs to be applied now."


Another way the kernel community is different than most traditional software engineering work environments is the faceless nature of interaction. One benefit of using email and irc as the primary forms of communication is the lack of discrimination based on gender or race. The Linux kernel work environment is accepting of women and minorities because all you are is an email address. The international aspect also helps to level the playing field because you can't guess gender based on a person's name. A man may be named Andrea and a woman may be named Pat. Most women who have worked in the Linux kernel and have expressed an opinion have had positive experiences.

The language barrier can cause problems for some people who are not comfortable with English. A good grasp of the language can be needed in order to get ideas across properly on mailing lists, so it is recommended that you check your emails to make sure they make sense in English before sending them.

Break up your changes


The Linux kernel community does not gladly accept large chunks of code dropped on it all at once. The changes need to be properly introduced, discussed, and broken up into tiny, individual portions. This is almost the exact opposite of what companies are used to doing. Your proposal should also be introduced very early in the development process, so that you can receive feedback on what you are doing. It also lets the community feel that you are working with them, and not simply using them as a dumping ground for your feature. However, don't send 50 emails at one time to a mailing list; your patch series should be smaller than that almost all of the time.

The reasons for breaking things up are the following:

  • Small patches increase the likelihood that your patches will be applied, since they don't take much time or effort to verify for correctness. A 5 line patch can be applied by a maintainer with barely a second glance. However, a 500 line patch may take hours to review for correctness (the time it takes is exponentially proportional to the size of the patch, or something).

    Small patches also make it very easy to debug when something goes wrong. It's much easier to back out patches one by one than it is to dissect a very large patch after it's been applied (and broken something).

  • It's important not only to send small patches, but also to rewrite and simplify (or simply re-order) patches before submitting them.


Here is an analogy from kernel developer Al Viro:
"Think of a teacher grading homework from a math student. The teacher does not want to see the student's trials and errors before they came up with the solution. They want to see the cleanest, most elegant answer. A good student knows this, and would never submit her intermediate work before the final solution."

The same is true of kernel development. The maintainers and reviewers do not want to see the thought process behind the solution to the problem one is solving. They want to see a simple and elegant solution.

It may be challenging to keep the balance between presenting an elegant solution and working together with the community while discussing your unfinished work. Therefore it is good to get into the process early in order to get feedback and improve your work, but also to keep your changes in small chunks that may already be accepted, even when your whole task is not yet ready for inclusion.

Also realize that it is not acceptable to send patches for inclusion that are unfinished and will be "fixed up later."

Justify your change


Along with breaking up your patches, it is very important for you to let the Linux community know why they should add this change. New features must be justified as being needed and useful.

Document your change


When sending in your patches, pay special attention to what you say in the text in your email. This information will become the ChangeLog information for the patch, and will be preserved for everyone to see for all time. It should describe the patch completely, containing:

  • why the change is necessary

  • the overall design approach in the patch

  • implementation details

  • testing results


For more details on what this should all look like, please see the ChangeLog section of the document:
"The Perfect Patch" http://www.zip.com.au/~akpm/linux/patches/stuff/tpp.txt

All of these things are sometimes very hard to do. It can take years to perfect these practices (if at all). It's a continuous process of improvement that requires a lot of patience and determination. But don't give up, it's possible. Many have done it before, and each had to start exactly where you are now.