Category Archives: Network Technology

Manually splitting the Discuz X3-series post table

If a single post table exceeds 10 GB, it is advisable to split it.

The steps below can be performed without taking the forum offline, but given the resources they consume, it is strongly recommended to back up the database and close the forum before starting.

1. Create the post sub-table, e.g.:

CREATE TABLE `pre_forum_post_2` (
`pid` int(10) unsigned NOT NULL,
`fid` mediumint(8) unsigned NOT NULL DEFAULT '0',
`tid` int(10) NOT NULL DEFAULT '0',
`first` tinyint(1) NOT NULL DEFAULT '0',
`author` varchar(15) NOT NULL DEFAULT '',
`authorid` mediumint(8) unsigned NOT NULL DEFAULT '0',
`subject` varchar(80) NOT NULL DEFAULT '',
`dateline` int(10) unsigned NOT NULL DEFAULT '0',
`message` mediumtext NOT NULL,
`useip` varchar(15) NOT NULL DEFAULT '',
`port` smallint(6) unsigned NOT NULL DEFAULT '0',
`invisible` tinyint(1) NOT NULL DEFAULT '0',
`anonymous` tinyint(1) NOT NULL DEFAULT '0',
`usesig` tinyint(1) NOT NULL DEFAULT '0',
`htmlon` tinyint(1) NOT NULL DEFAULT '0',
`bbcodeoff` tinyint(1) NOT NULL DEFAULT '0',
`smileyoff` tinyint(1) NOT NULL DEFAULT '0',
`parseurloff` tinyint(1) NOT NULL DEFAULT '0',
`attachment` tinyint(1) NOT NULL DEFAULT '0',
`rate` smallint(6) NOT NULL DEFAULT '0',
`ratetimes` tinyint(3) unsigned NOT NULL DEFAULT '0',
`status` int(10) NOT NULL DEFAULT '0',
`tags` varchar(255) NOT NULL DEFAULT '0',
`comment` tinyint(1) NOT NULL DEFAULT '0',
`replycredit` int(10) NOT NULL DEFAULT '0',
`position` int(8) unsigned NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`tid`,`position`),
UNIQUE KEY `pid` (`pid`),
KEY `fid` (`fid`),
KEY `authorid` (`authorid`,`invisible`),
KEY `dateline` (`dateline`),
KEY `invisible` (`invisible`),
KEY `displayorder` (`tid`,`invisible`,`dateline`),
KEY `first` (`tid`,`first`)
) ENGINE=MyISAM DEFAULT CHARSET=gbk;

2. Copy the posts over from the main table.
In practice, roughly 10 million posts per table is manageable on ordinary storage. Judge by your own forum's profile: if replies are sparse, around 2 million threads may already add up to 10 million posts.
Once you have settled on a cutoff, start moving:
INSERT INTO pre_forum_post_2 SELECT * FROM pre_forum_post WHERE tid<2000000;

3. Update the records in the thread table:
update pre_forum_thread set posttableid=2 where tid in (select distinct tid from pre_forum_post_2 where first=1 );

4. Delete the duplicated rows from the main table:
DELETE FROM pre_forum_post WHERE tid in (select distinct tid from pre_forum_post_2);

If you need to split into more tables, simply repeat the steps above.
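The four steps can be wrapped up as a script. This is only a sketch: the database name, table suffix, and tid cutoff are placeholders to adjust; CREATE TABLE ... LIKE reproduces the structure shown above, and the UPDATE uses the tid cutoff directly, which is equivalent to the subquery form once every post below the cutoff has been moved:

```shell
#!/bin/sh
# Sketch of the manual split above; DB name, cutoff and suffix are examples.
DB=discuz          # your Discuz database
SUFFIX=2           # posttableid of the new sub-table
CUTOFF=2000000     # move posts whose tid is below this

# 1. create pre_forum_post_$SUFFIX with the same structure (no data)
mysql "$DB" -e "CREATE TABLE pre_forum_post_${SUFFIX} LIKE pre_forum_post;"

# 2. copy the old posts over
mysql "$DB" -e "INSERT INTO pre_forum_post_${SUFFIX} SELECT * FROM pre_forum_post WHERE tid < ${CUTOFF};"

# 3. point the moved threads at the new table
mysql "$DB" -e "UPDATE pre_forum_thread SET posttableid = ${SUFFIX} WHERE tid < ${CUTOFF};"

# 4. drop the duplicated rows from the main table
mysql "$DB" -e "DELETE FROM pre_forum_post WHERE tid < ${CUTOFF};"
```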

Recovering lost data from .ibd files after accidentally deleting ibdata1

What happened: in the small hours last night, during routine maintenance on the im286 forum's master and slave database servers, a flaky mouse led to ibdata1 being deleted from the MySQL data directory. The forum's thread table uses InnoDB, so its data was gone in an instant.
That left two options:
1. Restore the scheduled backup; but the backup is not real-time, so some posts would be lost.
2. Recover the data from the .ibd files.

After some thought I went with option two; it was the middle of the night, so there was plenty of time.
The idea is simple: InnoDB stores each table's data in its .ibd file. Create an empty copy of the table, detach the blank .ibd from it, copy the .ibd that still holds the data into its place, then re-attach it.
Straight to work:
First create a scratch database and import the table structure. The thread table had been archived and partitioned, so this took two dumps:
mysql aaa < pre_forum_thread.sql
mysql aaa < pre_forum_thread_1.sql

Right, now detach the blank .ibd file:
mysql> use aaa
Database changed
mysql> alter table pre_forum_thread discard tablespace;
ERROR 1031 (HY000): Table storage engine for 'pre_forum_thread' doesn't have this option
mysql>

An error on a freshly created empty table. Google turned up nothing relevant, which had me stumped for a while. My guess was the partitioning: a partitioned table has several .ibd files, and this DISCARD TABLESPACE operation apparently cannot handle all of them:
/var/lib/mysql/im286/pre_forum_thread#P#p3.ibd
/var/lib/mysql/im286/pre_forum_thread#P#p4.ibd
/var/lib/mysql/im286/pre_forum_thread#P#p5.ibd
/var/lib/mysql/im286/pre_forum_thread#P#p6.ibd
/var/lib/mysql/im286/pre_forum_thread#P#p7.ibd
/var/lib/mysql/im286/pre_forum_thread#P#p8.ibd

So I tried the unpartitioned original thread table instead, and sure enough, without partitions the DISCARD TABLESPACE succeeded:
mysql> use aaa
Database changed
mysql> alter table pre_forum_thread discard tablespace;
Query OK, 0 rows affected (0.10 sec)

Since the unpartitioned table works, the rest is simple: import each partition's .ibd file as if it belonged to an unpartitioned table (the structure is identical anyway), and dump the table's data out after each import.
mysql> use aaa
Database changed
mysql> alter table pre_forum_thread discard tablespace;
Query OK, 0 rows affected (0.10 sec)

mysql> quit

cp -p /var/lib/mysql/im286/bak/pre_forum_thread#P#p3.ibd /var/lib/mysql/im286/aaa/pre_forum_thread.ibd

mysql> use aaa
Database changed
mysql> alter table pre_forum_thread import tablespace;
Query OK, 0 rows affected, 1 warning (11.57 sec)


No problem. Dump the data (with -t, so only the rows are dumped and the file can be appended to for each partition; -d would dump the structure only):
mysqldump -t aaa pre_forum_thread >> thread.sql

Repeat the steps above for every partition's .ibd file and all the thread data is recovered. Back on the forum's database, create an empty thread table and load the dump. Job done; off to bed.
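The per-partition dance can be scripted. A sketch, assuming the same scratch database aaa as above and that the partition .ibd files were saved under a bak/ directory (both paths are examples):

```shell
#!/bin/sh
# For each partition's .ibd, re-use the unpartitioned scratch table:
# discard its blank tablespace, drop the data file in, import it, dump, repeat.
BAK=/var/lib/mysql/im286/bak     # where the partition .ibd files were saved
DATADIR=/var/lib/mysql/aaa       # data directory of the scratch database

for ibd in "$BAK"/pre_forum_thread#P#p*.ibd; do
    mysql aaa -e 'ALTER TABLE pre_forum_thread DISCARD TABLESPACE;'
    cp -p "$ibd" "$DATADIR/pre_forum_thread.ibd"
    mysql aaa -e 'ALTER TABLE pre_forum_thread IMPORT TABLESPACE;'
    mysqldump -t aaa pre_forum_thread >> thread.sql   # -t: rows only
done
```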

Quickly finding PHP backdoors from a Linux shell

# find ./ -name "*.php" | xargs egrep "phpspy|c99sh|milw0rm|eval\(gzuncompress|eval\(base64_decode|spider_bc" > /tmp/php.txt

# grep -r --include=*.php '[^a-z]eval($_POST' . > /tmp/eval.txt

# grep -r --include=*.php 'file_put_contents(.*$_POST\[.*\]);' . > /tmp/file_put_contents.txt

# find ./ -name "*.php" -type f -print0 | xargs -0 egrep "(phpspy|c99sh|milw0rm|eval\(gzuncompress\(base64_decode|eval\(base64_decode|spider_bc|gzinflate)" | awk -F: '{print $1}' | sort | uniq

Find PHP files modified within the last day:

# find -mtime -1 -type f -name \*.php
Lock down the site's permissions:

# find -type f -name \*.php -exec chmod 444 {} \;

# find ./ -type d -exec chmod 555 {} \;
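The one-liners above can be bundled into a small scan script that collects every suspicious file into a single deduplicated report. A sketch; the patterns follow the commands above, and the script path and report location are arbitrary:

```shell
#!/bin/sh
# Scan a web root for common PHP backdoor signatures.
# Usage: ./scan.sh /path/to/webroot
WEBROOT=${1:-.}

# Known backdoor names and obfuscation patterns (-l: print filenames only).
find "$WEBROOT" -name '*.php' -type f -print0 \
    | xargs -0 egrep -l 'phpspy|c99sh|milw0rm|eval\(gzuncompress|eval\(base64_decode|spider_bc|gzinflate' \
    > /tmp/php_suspects.txt

# Direct eval of POST data.
grep -r -l --include='*.php' '[^a-z]eval($_POST' "$WEBROOT" >> /tmp/php_suspects.txt

# Deduplicate and show the report.
sort -u -o /tmp/php_suspects.txt /tmp/php_suspects.txt
cat /tmp/php_suspects.txt
```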

Installing PageSpeed on Nginx to speed up web pages

PageSpeed is a web-server module developed by Google (https://developers.google.com/speed/pagespeed/module). It aims to shorten page load times and cut the server's bandwidth usage. According to Google, PageSpeed typically saves around 25%-37% of bandwidth, though this depends on the content; some content simply cannot be compressed.

PageSpeed's features:
The PageSpeed module ships a large number of rewriting "filters", each of which can be enabled or disabled individually, automating optimizations such as shrinking documents, reducing the number of HTTP requests, cutting HTTP round trips, and shortening DNS lookup times.
Below are some of the filters ngx_pagespeed supports; for the full list, see the official documentation.
Collapse Whitespace: saves bandwidth by collapsing runs of whitespace in HTML pages into a single space.
Canonicalize JavaScript Libraries: saves bandwidth by automatically substituting popular JavaScript libraries with freely hosted copies (for example, hosted by Google).
Combine CSS: reduces the number of HTTP requests by merging multiple CSS files into one.
Combine JavaScript: reduces the number of HTTP requests by merging multiple JavaScript files into one.
Elide Attributes: shrinks documents by removing attributes whose values match the defaults.
Extend Cache: saves bandwidth by improving the cacheability of page resources.
Flatten CSS Imports: cuts HTTP round trips by removing @import from CSS files.
Lazyload Images: defers loading images that are not visible in the client's browser viewport.
Minify JavaScript: saves bandwidth by minifying JavaScript.
Optimize Images: improves image delivery by inlining more images, recompressing them, and converting GIF images to PNG.
Pre-Resolve DNS: shortens DNS lookup time by resolving domains ahead of time.
Prioritize Critical CSS: rewrites CSS files so that the rules needed to render the page load first.
The ngx_pagespeed module is not bundled in the Nginx packages shipped with major Linux distributions (e.g. Fedora 19), so to use PageSpeed with Nginx you have to build Nginx from source.

cd /home/src
NPS_VERSION=1.9.32.6
wget https://github.com/pagespeed/ngx_pagespeed/archive/release-${NPS_VERSION}-beta.zip
unzip release-${NPS_VERSION}-beta.zip
cd ngx_pagespeed-release-${NPS_VERSION}-beta/
wget https://dl.google.com/dl/page-speed/psol/${NPS_VERSION}.tar.gz
tar -xzvf ${NPS_VERSION}.tar.gz # extracts to psol/

cd /home/src
wget http://nginx.org/download/nginx-1.9.3.tar.gz
tar -xvzf nginx-1.9.3.tar.gz
cd nginx-1.9.3
./configure --add-module=../ngx_pagespeed-release-${NPS_VERSION}-beta
make
sudo make install

pagespeed can be configured globally in the http block of nginx.conf, or per server block, or even per location; it is very flexible.

A typical configuration for a forum:
pagespeed On;
pagespeed FileCachePath "/dev/shm/";
pagespeed Disallow "*.html"; ## required for forums using URL rewriting, otherwise a huge number of cache files pile up in a short time
pagespeed Disallow "*.php*";
pagespeed RewriteLevel OptimizeForBandwidth;
pagespeed EnableFilters combine_css,combine_javascript,canonicalize_javascript_libraries,collapse_whitespace,convert_meta_tags,dedup_inlined_images,flatten_css_imports,inline_import_to_link,inline_css,inline_javascript,rewrite_javascript,remove_comments,rewrite_css,rewrite_images,convert_gif_to_png,recompress_png,convert_jpeg_to_progressive,strip_image_color_profile,strip_image_meta_data,insert_image_dimensions,extend_cache,move_css_to_head,sprite_images;

For more detailed configuration, see the official documentation:
https://developers.google.com/speed/pagespeed/module/configuration

Common commands for searching file contents on Linux

Find lines in a file that contain a given string:
$ grep "string" filename
Example: search the .in files one directory level down for a given string:
grep "thermcontact" */*.in

Find lines matching a regular expression:
$ grep -e "regex" filename

Search without regard to case:
$ grep -i "string" filename

Count the matching lines:
$ grep -c "string" filename

Find lines that do NOT contain the given string:
$ grep -v "string" filename

Starting from the root directory, find every .log file and print the lines containing "ERROR":
find / -type f -name "*.log" | xargs grep "ERROR"
Example: from the current directory, find every .in file and print the lines containing "thermcontact":
find . -name "*.in" | xargs grep "thermcontact"
Find every PHP file on the system whose contents include "thermcontact" (quote the glob so the shell does not expand it):
locate "*.php" | xargs grep "thermcontact"

What does it actually cost to register a .COM domain?

This is about what a registrar pays to ICANN and to the registry, Verisign, not the ¥30 a year you pay a reseller.

To become a registrar, ICANN first charges a one-time, non-refundable application fee of $3,500.00 USD.
Fixed annual fee: $4,000.00 USD.
ICANN requires the company to carry commercial insurance of at least $500,000 USD, to protect users should the registrar go bankrupt; this costs roughly $1,000 USD a year (possibly a little less).
ICANN requires registered capital of at least $70,000 USD; this money is not handed to ICANN, you only need to show proof of assets.
Quarterly ICANN fees: around $1,000 USD per quarter for a small registrar, and this grows with registration volume.
Verisign charges $7.85 per .COM domain.
ICANN charges $0.18 per .COM domain.

Adding it up, a registrar's fixed costs come to at least $9,000 USD a year.
At the current USD/CNY exchange rate of about 6.2, that is over ¥55,800 a year in fixed outlays.
Each domain costs $8.03 USD, roughly ¥50, so the lowest price a registrar can offer resellers must be above ¥50 just to avoid losing money on the domain itself.
And "breaking even" there is still a loss, because the fixed annual fees and the company's operating expenses all burn cash too.

Suppose the registrar charges resellers ¥55, a ¥5 margin per domain: it then needs to hold more than 10,000 domains a year just to plug the ICANN and Verisign fixed-cost gap.
As for the company's operating expenses, currency-exchange losses, and the cost of dealing with the registry, I have never run one, so I cannot say.
At ¥5 profit per domain and a million domains under management, that is ¥5,000,000 a year; after operating expenses, there is not much left.
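The arithmetic above can be checked in one go; the figures are exactly the ones quoted in this post:

```shell
# Fixed yearly cost, per-domain cost, and break-even volume at a 5 RMB margin.
awk 'BEGIN {
    fixed = 4000 + 1000 + 4 * 1000          # annual fee + insurance + 4 quarters, USD
    per_domain = 7.85 + 0.18                # Verisign + ICANN, USD
    rate = 6.2                              # USD -> CNY
    printf "fixed yearly: $%d (= %d RMB)\n", fixed, fixed * rate
    printf "per domain:  $%.2f (~ %d RMB)\n", per_domain, per_domain * rate + 0.5
    printf "break-even:  %d domains at 5 RMB margin\n", fixed * rate / 5
}'
```

The break-even figure comes out at 11,160 domains, which is where the "more than 10,000" above comes from.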

In reality, there are so many registrars in China that competition is cut-throat, and most of them deliberately sell domain transfers and first-year registrations at a loss; ¥29 and ¥39 offers are both out there. So even with a million domains under management, once the losses offset the gains there is probably no profit at all.
Most registrars are not worried about losing money, though, because it is investors' money they are burning; the operators only need to show the investors growth. It is like JD.com: just get big. Liu Qiangdong never worried about profitability, he simply held suppliers' payments for three months and invested the float. In the end it is a capital game, which I do not understand, so feel free to flame away.

Testing disk read/write speed with time + dd

Write speed:

time dd if=/dev/zero of=test.db bs=8k count=300000
Here /dev/zero is a pseudo-device that only produces a stream of null bytes and generates no I/O of its own, so all the I/O is concentrated on the output file; since that file is only being written, the command effectively measures the disk's write speed.

The output looks something like this (a longer run is generally more accurate, so feel free to set count higher):
300000+0 records in
300000+0 records out

real 0m36.669s
user 0m0.185s
sys 0m9.340s

Write speed: 8 * 300000 / 1024 / 36.669 = 63.916 MB/s

Read speed:

time dd if=/dev/sda1 of=/dev/null bs=8k
Because /dev/sda1 is a physical partition, reading it generates real I/O, while /dev/null is a pseudo-device, a black hole that generates none; so the command's I/O happens only on /dev/sda1, which effectively measures the disk's read speed.

The output looks something like:
448494+0 records in
448494+0 records out

real 0m51.070s
user 0m0.054s
sys 0m10.028s

Read speed on sda1: 8 * 448494 / 1024 / 51.070 = 68.61 MB/s
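The throughput arithmetic generalizes to any bs/count/time combination. A small helper sketch; note also that without oflag=direct (or a final sync) the write figure includes the page cache, so it can read high:

```shell
# MB/s = bs_in_KB * count / 1024 / seconds; numbers from the two tests above.
mbps() { awk -v bs="$1" -v n="$2" -v t="$3" \
    'BEGIN { printf "%.3f MB/s\n", bs * n / 1024 / t }'; }

mbps 8 300000 36.669   # write test -> 63.916 MB/s
mbps 8 448494 51.070   # read test  -> 68.609 MB/s

# To keep the page cache out of the write figure, bypass it (where supported):
# dd if=/dev/zero of=test.db bs=8k count=300000 oflag=direct
```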

[Repost] Repairing a damaged ext3 superblock

The superblock is the first block of information read from the disk. It records the filesystem's geometry and free-space accounting, and, most importantly, the location of the first inode, "/". ext2/3 locates every file through inodes: to access /home/blue/test, the filesystem first finds the inode of /, then of /home, then of blue, and finally of test. Without the superblock, the filesystem is unusable.

An ext2/3 filesystem is made up of many block groups, and copies of the superblock are kept in several of them; normally only the superblock in block group 0 is read by programs such as mount and e2fsck. If the superblock is overwritten, for example because the partition was accidentally mkswap'ed, the information it held is lost, mount and fsck report a damaged superblock, and the filesystem can no longer be mounted. Because the superblock is so important, the filesystem's designers scattered backup copies across the block groups. All we have to do is use one of those scattered backups in place of the damaged copy, and all is well again.

First, find out where the backup superblocks are hiding. Note: the -n flag only lists the filesystem information and does not actually format the partition. Be extremely careful when using mke2fs, or you really will format the disk:

mke2fs -n /dev/hda10
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
883008 inodes, 1763125 blocks
88156 blocks (5.00%) reserved for the super user
First data block=0
54 block groups
32768 blocks per group, 32768 fragments per group
16352 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
So the backups are hiding at 32768, 98304, 163840, 229376, 294912, 819200, 884736 and 1605632. Now repair the filesystem with e2fsck:
e2fsck -b 32768 /dev/hda10
The -b option tells e2fsck to use an alternate superblock instead of the default one, since the default is damaged.
Answer y to the prompts as they come, and it is done.

Question: mounting fails with
mount /dev/sde1 /foo
mount: wrong fs type, bad option, bad superblock on /dev/sde1,
or too many mounted file systems
How can this problem be solved?
Answer:
This error message indicates that the superblock of the ext3 filesystem on /dev/sde1 is damaged; the filesystem's metadata is kept in the superblock. ext3 also keeps a number of backup superblocks, and you can try using one of them both to mount and to repair the filesystem.
The backup superblock locations can be obtained with the command below, which simulates the creation of the ext3 filesystem and prints the positions of the backup superblocks. The positions are reported in units of the 4 KB block size, while mount expects an offset in 1 KB units, so multiply by 4.
WARNING: be absolutely sure to pass "-n", so the creation of the ext3 filesystem is only simulated, not actually performed.
# mkfs.ext3 -n /dev/hda7
mke2fs 1.38 (30-Jun-2005)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
2198880 inodes, 4393738 blocks
219686 blocks (5.00%) reserved for the super user
First data block=0
135 block groups
32768 blocks per group, 32768 fragments per group
16288 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000
Mounting the ext3 filesystem with a backup superblock (note the 1 KB units: 32768 * 4 = 131072):
Syntax: mount.ext3 -o sb=n
# mount.ext3 -o sb=131072 /dev/hda7 /media/hda7
Repairing the ext3 filesystem with a backup superblock:
Syntax: fsck.ext3 -b superblock
# fsck.ext3 -b 32768 /dev/hda7

-----------------------------------------

1. The filesystem on a server's /dev/sdb was damaged and the system would not come up after a reboot; fsck from single-user mode could not repair it. After commenting out the /dev/sdb entry in /etc/fstab, the system booted successfully.

2. Mounting /dev/sdb fails with a bad superblock error:
[root@localhost ~]# mount /dev/sdb /test
mount: wrong fs type, bad option, bad superblock on /dev/sdb,
or too many mounted file systems

3. Try mounting the partition with a backup superblock specified directly. mkfs -n shows where the backups are:
[root@localhost ~]# mkfs.ext3 -n /dev/sdb
mke2fs 1.35 (28-Feb-2004)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
365854720 inodes, 731688960 blocks
36584448 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
22330 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544
The backup superblock locations can also be obtained with dumpe2fs:
[root@localhost ~]# dumpe2fs /dev/sdb | grep --before-context=1 superblock
dumpe2fs 1.35 (28-Feb-2004)
Group 0: (Blocks 0-32767)
Primary superblock at 0, Group descriptors at 1-175

Group 1: (Blocks 32768-65535)
Backup superblock at 32768, Group descriptors at 32769-32943

Group 3: (Blocks 98304-131071)
Backup superblock at 98304, Group descriptors at 98305-98479

Group 5: (Blocks 163840-196607)
Backup superblock at 163840, Group descriptors at 163841-164015

Group 7: (Blocks 229376-262143)
Backup superblock at 229376, Group descriptors at 229377-229551

Group 9: (Blocks 294912-327679)
Backup superblock at 294912, Group descriptors at 294913-295087

Group 25: (Blocks 819200-851967)
Backup superblock at 819200, Group descriptors at 819201-819375

Group 27: (Blocks 884736-917503)
Backup superblock at 884736, Group descriptors at 884737-884911

Group 49: (Blocks 1605632-1638399)
Backup superblock at 1605632, Group descriptors at 1605633-1605807

Group 81: (Blocks 2654208-2686975)
Backup superblock at 2654208, Group descriptors at 2654209-2654383

Group 125: (Blocks 4096000-4128767)
Backup superblock at 4096000, Group descriptors at 4096001-4096175

Group 243: (Blocks 7962624-7995391)
Backup superblock at 7962624, Group descriptors at 7962625-7962799

4. Mounting /dev/sdb with one of the backup superblocks found above also fails:
[root@localhost ~]# mount -o sb=32768 /dev/sdb /test
mount: wrong fs type, bad option, bad superblock on /dev/sdb,
or too many mounted file systems
The partition layout is as follows:
[root@localhost ~]# fdisk -l
Disk /dev/sda: 998.9 GB, 998999326720 bytes
255 heads, 63 sectors/track, 121454 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 16 128488+ 83 Linux
/dev/sda2 17 6390 51199155 83 Linux
/dev/sda3 6391 12764 51199155 83 Linux
/dev/sda4 12765 121454 873052425 5 Extended
/dev/sda5 12765 13796 8289508+ 82 Linux swap
/dev/sda6 13797 121454 864762853+ 83 Linux

Disk /dev/sdb: 2996.9 GB, 2996997980160 bytes
255 heads, 63 sectors/track, 364364 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

[root@localhost ~]# parted /dev/sdb
GNU Parted 1.6.19
Copyright (C) 1998 - 2004 Free Software Foundation, Inc.
This program is free software, covered by the GNU General Public License.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

Using /dev/sdb
(parted) print
Disk geometry for /dev/sdb: 0.000-2858160.000 megabytes
Disk label type: loop
Minor Start End Filesystem Flags
1 0.000 2858160.000 ext3
(parted) quit
Information: Don't forget to update /etc/fstab, if necessary.

5. Try repairing the partition with fsck and a backup superblock; it fails with "illegal bitmap block" errors:
[root@localhost ~]# fsck.ext3 -b 32768 /dev/sdb
e2fsck 1.35 (28-Feb-2004)
Block bitmap for group 4992 is not in group. (block 809140608)
Relocate? yes

Inode bitmap for group 4992 is not in group. (block 5385)
Relocate? yes

Inode table for group 4992 is not in group. (block 1295485238)
WARNING: SEVERE DATA LOSS POSSIBLE.
Relocate? yes

Block bitmap for group 4993 is not in group. (block 0)
Relocate? yes

Inode bitmap for group 4993 is not in group. (block 0)
Relocate? yes

Inode bitmap for group 21631 is not in group. (block 171349112)
Relocate? cancelled!

Inode table for group 21631 is not in group. (block 0)
WARNING: SEVERE DATA LOSS POSSIBLE.
Relocate? cancelled!

fsck.ext3: e2fsck_read_bitmaps: illegal bitmap block(s) for /dev/sdb

[root@localhost ~]# fsck.ext3 -b 163840 /dev/sdb
e2fsck 1.35 (28-Feb-2004)
Block bitmap for group 4992 is not in group. (block 809140608)
Relocate? yes

Inode bitmap for group 4992 is not in group. (block 5385)
Relocate? cancelled!

Inode table for group 4992 is not in group. (block 1295485238)
WARNING: SEVERE DATA LOSS POSSIBLE.
Relocate? cancelled!

Inode bitmap for group 21631 is not in group. (block 171349112)
Relocate? cancelled!

Inode table for group 21631 is not in group. (block 0)
WARNING: SEVERE DATA LOSS POSSIBLE.
Relocate? cancelled!

fsck.ext3: e2fsck_read_bitmaps: illegal bitmap block(s) for /dev/sdb

6. A plain fsck -y on the partition fails with the same errors:
[root@localhost ~]# fsck.ext3 -y /dev/sdb
e2fsck 1.35 (28-Feb-2004)
Group descriptors look bad… trying backup blocks…
Block bitmap for group 4992 is not in group. (block 809140608)
Relocate? yes

Inode bitmap for group 4992 is not in group. (block 5385)
Relocate? yes

Inode table for group 4992 is not in group. (block 1295485238)
WARNING: SEVERE DATA LOSS POSSIBLE.
Relocate? yes

Block bitmap for group 4993 is not in group. (block 0)
Relocate? yes

Inode bitmap for group 4993 is not in group. (block 0)
Relocate? yes

Inode table for group 4993 is not in group. (block 567580784)
WARNING: SEVERE DATA LOSS POSSIBLE.
Relocate? yes

Block bitmap for group 4994 is not in group. (block 0)
Relocate? yes

Block bitmap for group 21630 is not in group. (block 0)
Relocate? yes

Inode bitmap for group 21630 is not in group. (block 0)
Relocate? yes

Inode table for group 21630 is not in group. (block 0)
WARNING: SEVERE DATA LOSS POSSIBLE.
Relocate? yes

Block bitmap for group 21631 is not in group. (block 0)
Relocate? yes

Inode bitmap for group 21631 is not in group. (block 171349112)
Relocate? yes

Inode table for group 21631 is not in group. (block 0)
WARNING: SEVERE DATA LOSS POSSIBLE.
Relocate? yes

fsck.ext3: e2fsck_read_bitmaps: illegal bitmap block(s) for /dev/sdb

7. The last resort: regenerate the superblocks with mke2fs -S. (Only do this when nothing else works; if it fails, all the data may be lost.) This time the repair succeeded!
[root@localhost ~]# mke2fs -S /dev/sdb
mke2fs 1.35 (28-Feb-2004)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
365854720 inodes, 731688960 blocks
36584448 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
22330 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 39 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

8. The partition can finally be mounted:
[root@localhost ~]# mount /dev/sdb /test

9. tune2fs -l on /dev/sdb shows the filesystem state as "not clean with errors", and browsing around shows that many files are still damaged:
[root@localhost ~]# tune2fs -l /dev/sdb
tune2fs 1.35 (28-Feb-2004)
Filesystem volume name:
Last mounted on:
Filesystem UUID: 2f3e8c46-64c0-4346-b7f6-edcfd457617a
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: resize_inode filetype sparse_super
Default mount options: (none)
Filesystem state: not clean with errors
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 365854720
Block count: 731688960
Reserved block count: 36584448
Free blocks: 720188790
Free inodes: 365854720
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 849
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 16384
Inode blocks per group: 512
Filesystem created: Fri Feb 18 14:37:14 2011
Last mount time: Fri Feb 18 14:37:33 2011
Last write time: Fri Feb 18 14:37:33 2011
Mount count: 1
Maximum mount count: 39
Last checked: Fri Feb 18 14:37:14 2011
Check interval: 15552000 (6 months)
Next check after: Wed Aug 17 14:37:14 2011
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 128
Default directory hash: tea
Directory Hash Seed: 72bc7ce8-d9db-40c4-8ee8-85e169dd4bc5

10. So run fsck over /dev/sdb once more; at over 2 TB, the repair took 9 hours:
[root@localhost ~]# fsck.ext3 -y /dev/sdb
e2fsck 1.35 (28-Feb-2004)
/dev/sdb contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Journal inode is not in use, but contains data. Clear? yes

Inode 177963137 is in use, but has dtime set. Fix? yes

Inode 177963137 has imagic flag set. Clear? yes

Inode 177963147 is in use, but has dtime set. Fix? yes

Inode 177963147 has imagic flag set. Clear? yes

Inode 177963155 is in use, but has dtime set. Fix? yes

Inode 177963155 has imagic flag set. Clear? yes

Inode 177963156 is in use, but has dtime set. Fix? yes

Inode 177963156, i_blocks is 4294967295, should be 0. Fix? yes

Inode 177963155 has compression flag set on filesystem without compression support. Clear? yes

Inode 177963155 has INDEX_FL flag set but is not a directory.

11. After the repair, the /dev/sdb partition is usable again, but the filesystem now shows up as ext2:
[root@localhost ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda3 ext3 49G 36G 11G 78% /
/dev/sda6 ext3 812G 750G 22G 98% /bk
/dev/sda1 ext3 122M 13M 103M 12% /boot
none tmpfs 3.9G 0 3.9G 0% /dev/shm
/dev/sda2 ext3 49G 3.2G 43G 7% /opt
/dev/sdb ext2 2.7T 2.3T 349G 87% /test

12. Convert the partition back to ext3 with tune2fs -j:
[root@localhost ~]# tune2fs -j /dev/sdb
tune2fs 1.35 (28-Feb-2004)
Creating journal inode: done
This filesystem will be automatically checked every 39 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

13. Success. The partition state is normal again, most of the data is still there, and it is back in service. Over:
[root@localhost ~]# tune2fs -l /dev/sdb
tune2fs 1.35 (28-Feb-2004)
Filesystem volume name:
Last mounted on:
Filesystem UUID: 2f3e8c46-64c0-4346-b7f6-edcfd457617a
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal resize_inode filetype sparse_super large_file
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 365854720
Block count: 731688960
Reserved block count: 36584448
Free blocks: 120881374
Free inodes: 360051869
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 849
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 16384
Inode blocks per group: 512
Filesystem created: Fri Feb 18 14:37:14 2011
Last mount time: Mon Mar 7 16:52:54 2011
Last write time: Tue Mar 15 11:45:42 2011
Mount count: 4
Maximum mount count: 39
Last checked: Fri Feb 18 22:44:35 2011
Check interval: 15552000 (6 months)
Next check after: Wed Aug 17 22:44:35 2011
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 128
Journal inode: 8404993
Default directory hash: tea
Directory Hash Seed: 72bc7ce8-d9db-40c4-8ee8-85e169dd4bc5

Making your own certificate for an Nginx SSL host

First generate a key:
openssl genrsa -des3 -out ssl.key 2048
You will be asked to set a passphrase on the key file. Since nginx will be using this key, every nginx configuration reload would prompt you for this PEM passphrase.
A passphrase is mandatory at generation time, but you can strip it off afterwards:
openssl rsa -in ssl.key -out ssl.key

Then generate a certificate signing request from the key:
openssl req -new -key ssl.key -out ssl.csr
This prompts for quite a few fields; fill them in one by one (the values can be arbitrary, since you are issuing the certificate yourself).

Finally, generate the .crt certificate from these two files:
openssl x509 -req -days 3650 -in ssl.csr -signkey ssl.key -out ssl.crt

If you need a .pfx, it can be generated with:
openssl pkcs12 -export -inkey ssl.key -in ssl.crt -out ssl.pfx

Add the following to the server block of any nginx configuration that should use the certificate:
ssl on;
ssl_certificate /home/ssl.crt;
ssl_certificate_key /home/ssl.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers AES128+EECDH:AES128+EDH:!aNULL:AES128-SHA:AES256-SHA:RC4-SHA:DES-CBC3-SHA:RC4-MD5;
ssl_prefer_server_ciphers on;
Then restart nginx, and you are done.
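As a side note, the key, CSR, and signing steps can be collapsed into a single command on reasonably recent OpenSSL; a sketch, with a placeholder CN to replace with your own hostname:

```shell
# One shot: unencrypted 2048-bit key plus self-signed cert valid 10 years.
# -nodes skips the passphrase, so nginx reloads will not prompt.
# The CN below is a placeholder; use your own hostname.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout ssl.key -out ssl.crt -days 3650 \
    -subj "/CN=www.example.com"

# Sanity-check what was produced:
openssl x509 -in ssl.crt -noout -subject -dates
```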

Simple commands to spot a CC attack on a Linux host

CC (HTTP flood) attacks are easy to launch and cost almost nothing, which is why they keep getting more common.
Most people running CC attacks use tools downloaded off the net, and those tools rarely bother to forge realistic traffic, so they leave traces.
The commands below help you work out whether you are under a CC attack.

First command:
tcpdump -s0 -A -n -i any | grep -o -E '(GET|POST|HEAD) .*'

Normal output looks something like this:
POST /ajax/validator.php HTTP/1.1
POST /api_redirect.php HTTP/1.1
GET /team/57085.html HTTP/1.1
POST /order/pay.php HTTP/1.1
GET /static/goodsimg/20140324/1_47.jpg HTTP/1.1
GET /static/theme/qq/css/index.css HTTP/1.1
GET /static/js/index.js HTTP/1.1
GET /static/js/customize.js HTTP/1.1
GET /ajax/loginjs.php?type=topbar& HTTP/1.1
GET /static/js/jquery.js HTTP/1.1
GET /ajax/load_team_time.php?team_id=57085 HTTP/1.1
GET /static/theme/qq/css/index.css HTTP/1.1
GET /static/js/lazyload/jquery.lazyload.min.js HTTP/1.1
GET /static/js/MSIE.PNG.js HTTP/1.1
GET /static/js/index.js HTTP/1.1
GET /static/js/customize.js HTTP/1.1
GET /ajax/loginjs.php?type=topbar& HTTP/1.1
GET /static/theme/qq/css/i/logo.jpg HTTP/1.1
GET /static/theme/qq/css/i/logos.png HTTP/1.1
GET /static/theme/qq/css/i/hot.gif HTTP/1.1
GET /static/theme/qq/css/i/brand.gif HTTP/1.1
GET /static/theme/qq/css/i/new.gif HTTP/1.1
GET /static/js/jquery.js HTTP/1.1
GET /static/theme/qq/css/i/logo.jpg HTTP/1.1
Normal output is dominated by static files: CSS, JS, and images of all kinds.
Under attack you will instead see large numbers of identical URLs. If the front page is the target, expect masses of "GET / HTTP/1.1"; if the target has a pattern, say a Discuz forum, you might see masses of addresses like "/thread-<random number>-1-1.html".

Second command:
tcpdump -s0 -A -n -i any | grep ^User-Agent
The output looks something like this:
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
User-Agent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1
User-Agent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
User-Agent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1
User-Agent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; 360space)
User-Agent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1
User-Agent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; 360space)
User-Agent: Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)
User-Agent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1
User-Agent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1
User-Agent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
User-Agent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1
User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0)
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; InfoPath.2)
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)

This inspects the clients' user agents. In normal traffic you see a wide variety of user agents.
Most attacks use a single fixed user agent, so you will see the same string flooding the screen. I have only once seen randomized user agents, and they came out as gibberish like "axd5m8usy", which is still easy to tell apart.

Third command:
tcpdump -s0 -A -n -i any | grep ^Host
If the machine hosts many sites, this command finds out which site is receiving the flood of requests.
The output looks something like this:
Host: www.server110.com
Host: www.server110.com
Host: www.server110.com
Host: upload.server110.com
Host: upload.server110.com
Host: upload.server110.com
Host: upload.server110.com
Host: upload.server110.com
Host: upload.server110.com
Host: upload.server110.com
Host: upload.server110.com
Host: upload.server110.com
Host: www.server110.com
Host: upload.server110.com
Host: upload.server110.com
Host: upload.server110.com
Host: www.server110.com
Host: www.server110.com
Host: upload.server110.com
Host: upload.server110.com
Host: upload.server110.com
Host: www.server110.com
Host: upload.server110.com
Host: upload.server110.com
Host: www.server110.com

tcpdump is usually not installed by default.
To install it on CentOS: yum install -y tcpdump
On Debian/Ubuntu: apt-get install -y tcpdump

Many novice admins do not know how to set up or read web-server logs; the commands above are far simpler, just paste them into a shell and run them.
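Any of the three commands can be turned into a quick ranking by appending a sort | uniq -c tail, which makes a flood stand out immediately. A sketch; the -c 2000 packet cap is an arbitrary sample size, and the same tail works with the URL and User-Agent variants above:

```shell
# Rank the most-requested Hosts over ~2000 captured packets, highest first.
tcpdump -s0 -A -n -c 2000 -i any 2>/dev/null \
    | grep ^Host \
    | sort | uniq -c | sort -rn | head -n 10
```

Under a CC attack one line of this ranking will typically dwarf all the others.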


http://www.server110.com/linux_sec/201406/10670.html