Favorites list
Title  Tags  Source
Regex online visualizer  regex  https://jex.im/regulex/#!embed=false&flags=&re=%5E(%5Ea%7CC%7Cb)*%3F%24

        
Mega-collection of development resource platforms  http://www.oschina.net/question/2285044_219206
Planning & Design


Icon downloads

Easyicon: http://www.easyicon.net/

Findicons: http://findicons.com/

Taobao icon library (iconfont): http://www.iconfont.cn/

IconArchive: http://www.iconarchive.com/

Dryicons: http://dryicons.com/

Iconmonstr: http://iconmonstr.com/


Image downloads

Quanjing: http://www.quanjing.com/

Pixabay: http://pixabay.com/

58pic: http://www.58pic.com/

Nipic: http://www.nipic.com/

Twitter Covers: http://www.twitrcovers.com/

Photo Naver (Korea): http://photo.naver.com/


Web templates

CSSwinner: http://www.csswinner.com/

CSSawards: http://cssdesignawards.com/

Awwwards: http://www.awwwards.com/

Tuweia: http://tuweia.cn/c/home/a/discovery

Reeoo: http://reeoo.com/

Straightline (Japanese site showcase): http://bm.straightline.jp/


Font design

Font Squirrel: http://www.fontsquirrel.com/

A5 fonts: http://ziti.admin5.com/

Zhaozi: http://www.zhaozi.cn/

Knowsky font library: http://font.knowsky.com/

Abstract Fonts: http://www.abstractfonts.com/

Font Fabric: http://fontfabric.com/


Logo design

Logopond: http://logopond.com/

Easylogo: http://www.easylogo.cn/

LogoMoose: http://www.logomoose.com/

Logo Faves: http://logofaves.com/

Logo Fury: http://logofury.com/

Logo Design Love: http://www.logodesignlove.com/


General UI

Freepik: http://www.freepik.com/

Codrops: http://tympanus.net/codrops/

ZCOOL: http://www.zcool.com.cn/

Codepen: http://codepen.io/

Designmodo: http://designmodo.com/

Pixeden: http://www.pixeden.com/


Creative inspiration

Pinterest: http://www.pinterest.com/all/design/

9GAG: http://9gag.com/

Fubiz: http://www.fubiz.net/

Huaban: http://huaban.com/

Visual China (shijue.me): http://shijue.me/home

Booooooom: http://www.booooooom.com/

Feature Development


Push notifications

Baidu Cloud Push: http://developer.baidu.com/cloud/push

Getui: http://www.igetui.com/

JPush: http://www.jpush.cn/

Umeng Push: http://www.umeng.com/push/

Bmob Push: http://www.codenow.cn/

Huawei Push: http://developer.huawei.com/push/

Maps

Baidu Maps: http://developer.baidu.com/map/index.php?title=%E9

AMap: http://lbs.amap.com/

Google Maps: https://developers.google.com/

Tencent Maps: http://open.map.qq.com/

Apple Maps: https://developer.apple.com/cn/

Sogou Maps: http://map.sogou.com/api/?IPLOC=CN4101


Social sharing

ShareSDK: http://mob.com/

Baidu Social Share: http://developer.baidu.com/soc/share

Umeng Share: http://www.umeng.com/social

Bshare: http://www.bshare.cn/


IM

Easemob: http://www.easemob.com/

RongCloud IM: http://www.rongcloud.cn/

Ronglian Yuntongxun: http://www.yuntongxun.com/

Gotye IM cloud: http://gotye.com.cn/

Yunva voice: http://www.yunva.com:8080/yunva/home.html


Speech recognition

iFLYTEK voice: http://open.voicecloud.cn/

WeChat voice: http://pr.weixin.qq.com/

Baidu speech recognition: http://developer.baidu.com/wiki/index.php?title=do

Unisound: http://dev.hivoice.cn/usc.jsp

Sogou speech cloud: http://openspeech.sogou.com/Sogou/php/index.php

Mobvoi: http://www.mobvoi.com/


Third-party login

Tencent QQ Connect: http://wiki.connect.qq.com/sdk%E4%B8%8B%E8%BD%BD

Baidu social login: http://developer.baidu.com/frontia/sociallogin

Renren Connect: http://wiki.dev.renren.com/wiki/Renren_Connect

Xiaomi account login: http://dev.xiaomi.com/doc/?page_id=1668

Input methods

Baidu IME: http://developer.baidu.com/ms/input

Sogou IME: http://shouji.sogou.com/open/

FIT IME: http://funinput.com/


Face recognition

Face++: http://www.faceplusplus.com.cn/

ReKognition: http://rekognition.com/

Baidu face recognition: http://developer.baidu.com/wiki/index.php?title=do

Hanvon Cloud face recognition: http://developer.hanvon.com/


Payment platforms

Alipay Open Platform: https://openhome.alipay.com/doc/docIndex.html

UnionPay Open Platform: http://mobile.unionpay.com/

360 Mobile Open Platform: http://dev.360.cn/

Gfan Pay: http://www.gfan.com/dev/gpay/


Translation

Baidu Translate: http://developer.baidu.com/ms/translate

Youdao Translate: http://fanyi.youdao.com/openapi

Google Translate: https://developers.google.com/translate/?hl=zh-cn


Daily-life services

Juhe Data: http://www.juhe.cn/

China Weather: http://smart.weather.com.cn/wzfw/smart/weatherapi.

Kuaidi100: http://www.kuaidi100.com/openapi/

PM25in: http://www.pm25.in/


Video services

Youku Open Platform: http://open.youku.com/down

Xunlei cloud acceleration open platform: http://open.xunlei.com/

Baidu cloud video open platform: http://developer.baidu.com/wiki/index.php?title=do

iQiyi Open Platform: http://open.iqiyi.com/


Rapid development

Cutt app factory: http://www.cutt.com/app

Tencent Fenglin: http://fl.qq.com/

Appmakr: http://appmakr.com/

Basic4Android: http://www.basic4ppc.com/

AnySDK: http://www.anysdk.com/

Yunshipei: http://www.yunshipei.com/


Community

Sina Weibo Open Platform: http://open.weibo.com/

Tianya Open Platform: http://open.tianya.cn/index.php

Baidu Tieba SDK: http://c.tieba.baidu.com/c/s/download/pc?src=webtb


Traffic monetization

Baidu Union: http://union.baidu.com/

LY.com open platform: http://union.ly.com/

Ctrip affiliate network: http://open.ctrip.com/

Taobao Alliance (Alimama): http://u.alimama.com/


Mobile-game recording

Aipai RecNow: http://recnow.aipai.com/

ShareRec: http://rec.mob.com/

Kamcord SDK: http://www.kamcord.com/

EveryPlay SDK: https://developers.everyplay.com/

LobiRec SDK: http://qiita.com/KAMEDAkyosuke/items/ee85d8943b974

Operations Services

App hardening

Ijiami: http://www.ijiami.cn/

Apkprotect: http://apkprotect.com/


Analytics

Umeng app analytics: http://www.umeng.com/analytics

Baidu Mobile Analytics: http://mtj.baidu.com/web/welcome/login

Gfan analytics: http://www.gfan.com/dev/analytics/

Umeng game analytics: http://www.umeng.com/analytics_games

Cloud Services


Cloud computing

Huawei Cloud: http://www.hwclouds.com/

Amazon AWS: http://www.amazonaws.cn/

Aliyun: http://www.aliyun.com/

Grand Cloud: http://www.grandcloud.cn/

Meituan Cloud: https://mos.meituan.com/

Sina Cloud: http://sinacloud.com/


Cloud storage

Baidu cloud database: http://developer.baidu.com/cloud/db

Qiniu: http://www.qiniu.com/

Upyun: https://www.upyun.com/

Sina Vdisk: http://vdisk.weibo.com/developers/

Bmob: http://www.codenow.cn/


Cloud engines

Aliyun ACE: http://www.aliyun.com/product/ace/

Baidu App Engine (BAE): http://developer.baidu.com/cloud/rt

Grand Cloud engine: http://www.grandcloud.cn/product/ae

Volcano cloud engine: http://www.volit.net/


Cloud testing

Testin: http://www.testin.cn/

JD Yunfeng: http://maengine.jd.com/product/162

Yiceyun: http://www.yiceyun.com/

Baidu crowd testing: http://developer.baidu.com/yunzhongce

Testbird: http://www.testbird.com/

China Mobile device pool: http://dev.10086.cn/rts/rts/rts-home.do


Other cloud services

Hanvon Cloud: http://developer.hanvon.com/

Sohu SendCloud: http://sendcloud.sohu.com/

HCI Cloud: http://www.hcicloud.com/

Baidu location cloud: http://lbsyun.baidu.com/location

Baidu place-search cloud: http://lbsyun.baidu.com/search

kiiCloud: http://cn.kii.com/


Video cloud

Polyv: http://www.polyv.net/

CC Video (Bokecc): http://www.bokecc.com/

Weilai Cloud: http://asdtv.com/

Shishan Video (SMVP): http://smvp.cn/

LeCloud: http://www.letvcloud.com/

Aoyou Xunhai: http://www.573v.cn/


Image cloud

Upyun: https://www.upyun.com/index.html

Detu Cloud: http://www.detuyun.com/

Qiniu: http://www.qiniu.com/

Marketing & Promotion


Ad platforms

Kuguo ad platform: http://www.kuguopush.com/

Youmi ads: http://www.youmi.net/

Baidu Union: http://union.baidu.com/

Domob ads: http://www.domob.cn/

Dianjing ad platform: http://mjoy.91.com/


Distribution channels

Nduoa: http://www.nduoa.com/developer

Meizu developer center: http://developer.meizu.com/

Aliyun (YunOS) developer market: http://appdev.yunos.com/

China Mobile app store (MM): http://mm.10086.cn/


Channel aggregation

AnySDK: http://www.anysdk.com/

Lengjing (Prism) SDK: http://www.ljsdk.com/

1sdk: http://www.1sdk.cn/

AllSDK: http://www.allsdk.co/

OkSDK: http://www.oksdk.com/


Marketing platforms

360 Dianjing marketing platform: http://e.360.cn/

Tiandixindao: http://www.tiandixindao.com/

Baidu display network: http://wangmeng.baidu.com/

Yimi offline channel promotion: http://www.yimipingtai.cn/

Baidu Promotion: http://e.baidu.com/

Creation Platforms


In-site apps

Taobao seller services: http://fuwu.taobao.com/

AliExpress open platform: http://seller.aliexpress.com/open_platform/index.h

Dangdang open platform: http://open.dangdang.com/

Paipai open platform: http://pop.paipai.com/index
MongoDB MapReduce  mongodb  http://www.cnblogs.com/loogn/archive/2012/02/09/2344054.html
MapReduce is probably one of the more complex operations in MongoDB; it took me some real thought to understand when I started, so I'm writing it down here!

Command syntax (see the docs for full details):

db.runCommand(
 { mapreduce : string, the collection name,
   map : function, see below,
   reduce : function, see below,
   [, query : document, filters the documents before they are sent to map]
   [, sort : document, sorts the documents before they are sent to map]
   [, limit : integer, upper bound on the number of documents sent to map]
   [, out : string, the collection in which to store the results]
   [, keeptemp : boolean, whether to keep the temporary result collection when the connection closes]
   [, finalize : function, receives each reduce result for final post-processing]
   [, scope : document, variables visible to the JS code]
   [, jsMode : boolean, whether to avoid BSON/JS conversions during execution; default false]
       // false: BSON -> JS -> map -> BSON -> JS -> reduce -> BSON; can handle very large map-reduce jobs
       // true:  BSON -> JS -> map -> reduce -> BSON
   [, verbose : boolean, whether to produce more detailed server logs; default true]
 }
);
Test data: a small collection t whose documents have name and age fields (shown as a screenshot in the original post).

Now I want to collect the names that share the same age, i.e. results like:

{age:0,names:["name_6","name_12","name_18"]}
{age:1,names:["name_1","name_7","name_13","name_19"]}
......
The first step is to write the map function; you can loosely think of it as the grouping step:

var m=function(){
    emit(this.age,this.name);
}
emit's first parameter is the key, the grouping criterion — here, naturally, age. The second is the value, the data to aggregate; as explained below, the value can also be a JSON object.
m groups the incoming data by key; you can picture the result like this:

Group 1
{key:0, values:["name_6","name_12","name_18"]}

Group 2
{key:1, values:["name_1","name_7","name_13","name_19"]}
......
The key of each group is simply the age value; values is an array whose members all share that age.

The second step is the reduce phase — write the reduce function:

var r=function(key,values){
    var ret={age:key,names:values};
    return ret;
}
The reduce function processes each group; its parameters are exactly the key and values of the imagined groups above.

Here reduce simply wraps key and values into an object, because the grouped data is already almost exactly what we want, and returns it. The returned object has just the structure we imagined:

{age: the age, names: [name1, name2, ...]}
Finally, you can also write a finalize function to post-process reduce's return value:

var f=function(key,rval){
    if(key==0){
        rval.msg="a new life,baby!";
    }
    return rval
}
Here key is the same key as above, i.e. still age, and rval is reduce's return value, so an rval instance looks like {age:0,names:["name_6","name_12","name_18"]}.

The function checks whether key is 0 and, if so, adds an msg property to the rval object. You could just as well check rval.age==0, since key and rval.age are equal.

I won't go over the remaining options; they're self-explanatory.
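The whole map → reduce → finalize flow above can be sketched in plain Java collections, with no MongoDB involved — a minimal simulation using the article's sample name/age data; the class and method names here are invented for illustration, not part of any driver API:

```java
import java.util.*;

// Hypothetical stand-in for the mapreduce pipeline above: "emit" each (age, name)
// pair, merge the values per key, then run a finalize step on each group.
public class MapReduceSketch {

    // map + reduce combined: group every name under its age key, like emit(this.age, this.name)
    public static Map<Integer, List<String>> mapReduce(int[] ages, String[] names) {
        Map<Integer, List<String>> groups = new TreeMap<>();
        for (int i = 0; i < ages.length; i++) {
            groups.computeIfAbsent(ages[i], k -> new ArrayList<>()).add(names[i]);
        }
        return groups;
    }

    // finalize: wrap each group as {age, names} and tag age 0 with a msg property
    public static Map<String, Object> finalizeDoc(int age, List<String> names) {
        Map<String, Object> doc = new LinkedHashMap<>();
        doc.put("age", age);
        doc.put("names", names);
        if (age == 0) {
            doc.put("msg", "a new life,baby!");
        }
        return doc;
    }

    public static void main(String[] args) {
        int[] ages = {0, 1, 0, 1, 0, 1};
        String[] names = {"name_6", "name_1", "name_12", "name_7", "name_18", "name_13"};
        mapReduce(ages, names).forEach((age, group) ->
                System.out.println(finalizeDoc(age, group)));
    }
}
```

This is only the data flow; the real command additionally handles re-reducing partial groups and writing the result collection.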

Run it:

db.runCommand({
    mapreduce:"t",
    map:m,
    reduce:r,
    finalize:f,
    out:"t_age_names"
    }
)


The results are written to the t_age_names collection. Querying it gives exactly the result I wanted, and from the document structure it's easy to see that _id is the key and value is the processed return value.
Using Java with MongoDB  mongodb  http://blog.csdn.net/hx_uestc/article/details/7620938
HelloWorld program
  The first step in learning any system is the HelloWorld program, and we're no exception — let's write one against MongoDB in Java.
  First, to drive MongoDB from Java you must download the MongoDB Java driver.
  Create a new Java project, put the downloaded driver on the classpath, and use the following code:
package com.mkyong.core;
import java.net.UnknownHostException;
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.Mongo;
import com.mongodb.MongoException;

/**
* Java + MongoDB Hello world Example
* 
*/
public class App {
    public static void main(String[] args) {
        try {
            // Instantiate a Mongo object, connecting to port 27017
            Mongo mongo = new Mongo("localhost", 27017);
            // Connect to the database named "yourdb"; if it does not exist, MongoDB creates it
            DB db = mongo.getDB("yourdb");
            // Get the collection named "yourCollection"; if it does not exist, MongoDB creates it
            DBCollection collection = db.getCollection("yourCollection");
            // Use a BasicDBObject to create a MongoDB document and populate it
            BasicDBObject document = new BasicDBObject();
            document.put("id", 1001);
            document.put("msg", "hello world mongoDB in Java");
            // Save the new document into the collection
            collection.insert(document);
            // Build the query document
            BasicDBObject searchQuery = new BasicDBObject();
            searchQuery.put("id", 1001);
            // Use the collection's find method to look the document up
            DBCursor cursor = collection.find(searchQuery);
            // Loop over the results
            while (cursor.hasNext()) {
                System.out.println(cursor.next());
            }
            System.out.println("Done");
        } catch (UnknownHostException e) {
            e.printStackTrace();
        } catch (MongoException e) {
            e.printStackTrace();
        }
    }
}
  Finally, the output is:
{ "_id" : { "$oid" : "4dbe5596dceace565d229dc3"} , 
                "id" : 1001 , "msg" : "hello world mongoDB in Java"}
Done

  The example above demonstrates the essential steps for working with MongoDB from Java: create a Mongo object, passing the host and port of the MongoDB server to its constructor; call getDB to obtain the database and getCollection to obtain the collection; build a document with a new BasicDBObject; and save it with the collection's insert method. The collection's find method is then used to look documents up.
  Getting a collection from MongoDB
  You can obtain a collection from the database like this:
  DBCollection collection = db.getCollection("yourCollection");
  If you don't know the collection's name, call db.getCollectionNames() to get the set of names and iterate over it:
  DB db = mongo.getDB("yourdb");
  Set<String> collections = db.getCollectionNames();
  for (String collectionName : collections) {
      System.out.println(collectionName);
  }
  A complete example:
package com.mkyong.core;
import java.net.UnknownHostException;
import java.util.Set;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.Mongo;
import com.mongodb.MongoException;
/**
* Java : Get collection from MongoDB
* 
*/
public class GetCollectionApp {
    public static void main(String[] args) {
        try {
            Mongo mongo = new Mongo("localhost", 27017);
            DB db = mongo.getDB("yourdb");
            Set<String> collections = db.getCollectionNames();
            for (String collectionName : collections) {
                System.out.println(collectionName);
            }
            DBCollection collection = db.getCollection("yourCollection");
            System.out.println(collection.toString());
            System.out.println("Done");
        } catch (UnknownHostException e) {
            e.printStackTrace();
        } catch (MongoException e) {
            e.printStackTrace();
        }
    }
}
  Inserting data into MongoDB
  Next, four ways to insert JSON data into MongoDB. First, the JSON data we want to store:
  {
    "database" : "mkyongDB",
    "table" : "hosting",
    "detail" : {
      "records" : 99,
      "index" : "vps_index1",
      "active" : "true"
    }
  }

  We want to insert JSON of this shape from Java in several different ways.
  The first way is with BasicDBObject:
BasicDBObject document = new BasicDBObject();
document.put("database", "mkyongDB");
document.put("table", "hosting");
BasicDBObject documentDetail = new BasicDBObject();
documentDetail.put("records", "99");
documentDetail.put("index", "vps_index1");
documentDetail.put("active", "true");
document.put("detail", documentDetail);
collection.insert(document);
  The second way is with a BasicDBObjectBuilder:
  BasicDBObjectBuilder documentBuilder = BasicDBObjectBuilder.start()
  .add("database", "mkyongDB")
  .add("table", "hosting");
  BasicDBObjectBuilder documentBuilderDetail = BasicDBObjectBuilder.start()
  .add("records", "99")
  .add("index", "vps_index1")
  .add("active", "true");
  documentBuilder.add("detail", documentBuilderDetail.get());
  collection.insert(documentBuilder.get());
  The third way is with a Map:
  Map<String, Object> documentMap = new HashMap<String, Object>();
  documentMap.put("database", "mkyongDB");
  documentMap.put("table", "hosting");
  Map<String, Object> documentMapDetail = new HashMap<String, Object>();
  documentMapDetail.put("records", "99");
  documentMapDetail.put("index", "vps_index1");
  documentMapDetail.put("active", "true");
  documentMap.put("detail", documentMapDetail);
  collection.insert(new BasicDBObject(documentMap));
  The fourth and simplest way is to insert the JSON data directly:
  String json = "{'database' : 'mkyongDB','table' : 'hosting'," +
      "'detail' : {'records' : 99, 'index' : 'vps_index1', 'active' : 'true'}}";
  DBObject dbObject = (DBObject) JSON.parse(json);
  collection.insert(dbObject);
  This uses JSON.parse to turn the JSON string into a DBObject, which is then inserted into the collection directly.

  The complete code:
package com.mkyong.core;
import java.net.UnknownHostException;
import java.util.HashMap;
import java.util.Map;
import com.mongodb.BasicDBObject;
import com.mongodb.BasicDBObjectBuilder;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;
import com.mongodb.Mongo;
import com.mongodb.MongoException;
import com.mongodb.util.JSON;

/**
 * Java MongoDB : Insert a Document
 */
public class InsertDocumentApp {
    public static void main(String[] args) {
        try {
            Mongo mongo = new Mongo("localhost", 27017);
            DB db = mongo.getDB("yourdb");
            // get a single collection
            DBCollection collection = db.getCollection("dummyColl");

            // BasicDBObject example
            System.out.println("BasicDBObject example...");
            BasicDBObject document = new BasicDBObject();
            document.put("database", "mkyongDB");
            document.put("table", "hosting");
            BasicDBObject documentDetail = new BasicDBObject();
            documentDetail.put("records", "99");
            documentDetail.put("index", "vps_index1");
            documentDetail.put("active", "true");
            document.put("detail", documentDetail);
            collection.insert(document);
            DBCursor cursorDoc = collection.find();
            while (cursorDoc.hasNext()) {
                System.out.println(cursorDoc.next());
            }
            collection.remove(new BasicDBObject());

            // BasicDBObjectBuilder example
            System.out.println("BasicDBObjectBuilder example...");
            BasicDBObjectBuilder documentBuilder = BasicDBObjectBuilder.start()
                .add("database", "mkyongDB")
                .add("table", "hosting");
            BasicDBObjectBuilder documentBuilderDetail = BasicDBObjectBuilder.start()
                .add("records", "99")
                .add("index", "vps_index1")
                .add("active", "true");
            documentBuilder.add("detail", documentBuilderDetail.get());
            collection.insert(documentBuilder.get());
            DBCursor cursorDocBuilder = collection.find();
            while (cursorDocBuilder.hasNext()) {
                System.out.println(cursorDocBuilder.next());
            }
            collection.remove(new BasicDBObject());

            // Map example
            System.out.println("Map example...");
            Map<String, Object> documentMap = new HashMap<String, Object>();
            documentMap.put("database", "mkyongDB");
            documentMap.put("table", "hosting");
            Map<String, Object> documentMapDetail = new HashMap<String, Object>();
            documentMapDetail.put("records", "99");
            documentMapDetail.put("index", "vps_index1");
            documentMapDetail.put("active", "true");
            documentMap.put("detail", documentMapDetail);
            collection.insert(new BasicDBObject(documentMap));
            DBCursor cursorDocMap = collection.find();
            while (cursorDocMap.hasNext()) {
                System.out.println(cursorDocMap.next());
            }
            collection.remove(new BasicDBObject());

            // JSON parse example
            System.out.println("JSON parse example...");
            String json = "{'database' : 'mkyongDB','table' : 'hosting'," +
                "'detail' : {'records' : 99, 'index' : 'vps_index1', 'active' : 'true'}}";
            DBObject dbObject = (DBObject) JSON.parse(json);
            collection.insert(dbObject);
            DBCursor cursorDocJSON = collection.find();
            while (cursorDocJSON.hasNext()) {
                System.out.println(cursorDocJSON.next());
            }
            collection.remove(new BasicDBObject());
        } catch (UnknownHostException e) {
            e.printStackTrace();
        } catch (MongoException e) {
            e.printStackTrace();
        }
    }
}

  Updating documents
  Suppose the following JSON documents are already stored in MongoDB, and now we want to update them.
  {"_id" : {"$oid" : "x"} , "hosting" : "hostA" , "type" : "vps" , "clients" : 1000}
  {"_id" : {"$oid" : "x"} , "hosting" : "hostB" , "type" : "dedicated server" , "clients" : 100}
  {"_id" : {"$oid" : "x"} , "hosting" : "hostC" , "type" : "vps" , "clients" : 900}
  To update the document whose hosting is hostB, you can do the following:
  BasicDBObject newDocument = new BasicDBObject();
  newDocument.put("hosting", "hostB");
  newDocument.put("type", "shared host");
  newDocument.put("clients", 111);
  collection.update(new BasicDBObject().append("hosting", "hostB"), newDocument);
  Again a BasicDBObject is used: populate it with the new values, then call the collection's update method to update the matched document.
  The output after the update:
  {"_id" : {"$oid" : "x"} , "hosting" : "hostA" , "type" : "vps" , "clients" : 1000}
  {"_id" : {"$oid" : "x"} , "hosting" : "hostB" , "type" : "shared host" , "clients" : 111}
  {"_id" : {"$oid" : "x"} , "hosting" : "hostC" , "type" : "vps" , "clients" : 900}
  You can also use MongoDB's $inc modifier to update a value in place. For example, to bump the clients value of the hostB document to 199 (100 + 99 = 199):
  BasicDBObject newDocument = new BasicDBObject().append("$inc",
      new BasicDBObject().append("clients", 99));
  collection.update(new BasicDBObject().append("hosting", "hostB"), newDocument);
  The output is then:
  {"_id" : {"$oid" : "x"} , "hosting" : "hostA" , "type" : "vps" , "clients" : 1000}
  {"_id" : {"$oid" : "x"} , "hosting" : "hostB" , "type" : "dedicated server" , "clients" : 199}
  {"_id" : {"$oid" : "x"} , "hosting" : "hostC" , "type" : "vps" , "clients" : 900}
  Next, the $set modifier. For example, to change the type of the document whose hosting is hostA:
  BasicDBObject newDocument3 = new BasicDBObject().append("$set",
      new BasicDBObject().append("type", "dedicated server"));
  collection.update(new BasicDBObject().append("hosting", "hostA"), newDocument3);
  The output shows type changed from vps to dedicated server:
  {"_id" : {"$oid" : "x"} , "hosting" : "hostB" , "type" : "dedicated server" , "clients" : 100}
  {"_id" : {"$oid" : "x"} , "hosting" : "hostC" , "type" : "vps" , "clients" : 900}
  {"_id" : {"$oid" : "x"} , "hosting" : "hostA" , "clients" : 1000 , "type" : "dedicated server"}
  Note that without the $set modifier, i.e. with just:
  BasicDBObject newDocument3 = new BasicDBObject().append("type", "dedicated server");
  collection.update(new BasicDBObject().append("hosting", "hostA"), newDocument3);
  the matched document is replaced wholesale, so its other fields (hosting, clients) are lost — use $set when you want to update specific fields of a document while keeping the rest.
  To update the same field across several documents at once, pass multi=true as the fourth argument of update. For example, to set clients to 888 on every document whose type is vps:
  BasicDBObject updateQuery = new BasicDBObject().append("$set",
      new BasicDBObject().append("clients", "888"));
  collection.update(new BasicDBObject().append("type", "vps"), updateQuery, false, true);

  Output:
  {"_id" : {"$oid" : "x"} , "hosting" : "hostA" , "clients" : "888" , "type" : "vps"}
  {"_id" : {"$oid" : "x"} , "hosting" : "hostB" , "type" : "dedicated server" , "clients" : 100}
  {"_id" : {"$oid" : "x"} , "hosting" : "hostC" , "clients" : "888" , "type" : "vps"}
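The three update flavors above (whole-document replacement, $set, $inc) can be illustrated without a running MongoDB by modeling a document as a plain Map — a sketch of the semantics only, with invented class and method names, not the driver API:

```java
import java.util.*;

// Models the three update behaviors on a document represented as a Map.
public class UpdateSemantics {

    // update(query, newDoc) with no modifier: the matched document is replaced wholesale
    public static Map<String, Object> replace(Map<String, Object> oldDoc,
                                              Map<String, Object> newDoc) {
        return new LinkedHashMap<>(newDoc);
    }

    // {$set: {field: value}}: only the named field changes, the rest survive
    public static Map<String, Object> set(Map<String, Object> doc,
                                          String field, Object value) {
        Map<String, Object> copy = new LinkedHashMap<>(doc);
        copy.put(field, value);
        return copy;
    }

    // {$inc: {field: delta}}: add delta to the current numeric value
    public static Map<String, Object> inc(Map<String, Object> doc,
                                          String field, int delta) {
        Map<String, Object> copy = new LinkedHashMap<>(doc);
        copy.put(field, (Integer) copy.getOrDefault(field, 0) + delta);
        return copy;
    }

    public static void main(String[] args) {
        Map<String, Object> hostB = new LinkedHashMap<>();
        hostB.put("hosting", "hostB");
        hostB.put("type", "dedicated server");
        hostB.put("clients", 100);

        System.out.println(inc(hostB, "clients", 99));          // clients becomes 199
        System.out.println(set(hostB, "type", "shared host"));  // other fields kept
        System.out.println(replace(hostB,
                Collections.singletonMap("type", "shared host"))); // other fields lost
    }
}
```

The replace case is exactly the pitfall the text warns about: forget $set and the document shrinks to just the fields you supplied.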

  Finally, the complete document-update example:

package com.liao;
import java.net.UnknownHostException;
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.Mongo;
import com.mongodb.MongoException;

public class UpdateDocumentApp {
    public static void printAllDocuments(DBCollection collection) {
        DBCursor cursor = collection.find();
        while (cursor.hasNext()) {
            System.out.println(cursor.next());
        }
    }

    public static void removeAllDocuments(DBCollection collection) {
        collection.remove(new BasicDBObject());
    }

    public static void insertDummyDocuments(DBCollection collection) {
        BasicDBObject document = new BasicDBObject();
        document.put("hosting", "hostA");
        document.put("type", "vps");
        document.put("clients", 1000);
        BasicDBObject document2 = new BasicDBObject();
        document2.put("hosting", "hostB");
        document2.put("type", "dedicated server");
        document2.put("clients", 100);
        BasicDBObject document3 = new BasicDBObject();
        document3.put("hosting", "hostC");
        document3.put("type", "vps");
        document3.put("clients", 900);
        collection.insert(document);
        collection.insert(document2);
        collection.insert(document3);
    }

    public static void main(String[] args) {
        try {
            Mongo mongo = new Mongo("localhost", 27017);
            DB db = mongo.getDB("yourdb");
            DBCollection collection = db.getCollection("dummyColl");

            System.out.println("Testing 1...");
            insertDummyDocuments(collection);
            // find hosting = hostB, and update it with a new document
            BasicDBObject newDocument = new BasicDBObject();
            newDocument.put("hosting", "hostB");
            newDocument.put("type", "shared host");
            newDocument.put("clients", 111);
            collection.update(new BasicDBObject().append("hosting", "hostB"), newDocument);
            printAllDocuments(collection);
            removeAllDocuments(collection);

            System.out.println("Testing 2...");
            insertDummyDocuments(collection);
            BasicDBObject newDocument2 = new BasicDBObject().append("$inc",
                new BasicDBObject().append("clients", 99));
            collection.update(new BasicDBObject().append("hosting", "hostB"), newDocument2);
            printAllDocuments(collection);
            removeAllDocuments(collection);

            System.out.println("Testing 3...");
            insertDummyDocuments(collection);
            BasicDBObject newDocument3 = new BasicDBObject().append("$set",
                new BasicDBObject().append("type", "dedicated server"));
            collection.update(new BasicDBObject().append("hosting", "hostA"), newDocument3);
            printAllDocuments(collection);
            removeAllDocuments(collection);

            System.out.println("Testing 4...");
            insertDummyDocuments(collection);
            BasicDBObject updateQuery = new BasicDBObject().append("$set",
                new BasicDBObject().append("clients", "888"));
            collection.update(new BasicDBObject().append("type", "vps"), updateQuery, false, true);
            printAllDocuments(collection);
            removeAllDocuments(collection);

            System.out.println("Done");
        } catch (UnknownHostException e) {
            e.printStackTrace();
        } catch (MongoException e) {
            e.printStackTrace();
        }
    }
}

  Querying documents
  Now let's learn how to query documents. First insert the numbers 1 to 10 into the database:
  for (int i = 1; i <= 10; i++) {
      collection.insert(new BasicDBObject().append("number", i));
  }
  Next, some examples:
  1) Get the first document in the collection:
  DBObject doc = collection.findOne();
  System.out.println(doc);
  The output:
  {"_id" : {"$oid" : "4dc7f7b7bd0fb9a86c6c80bd"} , "number" : 1}
  2) Get all documents
  DBCursor cursor = collection.find();
  while (cursor.hasNext()) {
      System.out.println(cursor.next());
  }
  Here collection.find() returns a cursor over every document in the collection;
  iterating the DBCursor prints each one. Output:
  {"_id" : {"$oid" : "4dc7f7b7bd0fb9a86c6c80bd"} , "number" : 1}
  //.......... middle part omitted: outputs for 2 through 9
  {"_id" : {"$oid" : "4dc7f7b7bd0fb9a86c6c80c6"} , "number" : 10}
  3) Get a specific document
  To fetch the document with number = 5, build a query and pass it to the collection's find method:
  BasicDBObject query = new BasicDBObject();
  query.put("number", 5);
  DBCursor cursor = collection.find(query);
  while (cursor.hasNext()) {
      System.out.println(cursor.next());
  }
  Which outputs:
  {"_id" : {"$oid" : "4dc7f7b7bd0fb9a86c6c80c1"} , "number" : 5}
  4) The in operator
  MongoDB also supports the $in operator. To fetch the documents with number = 9 and number = 10:
  BasicDBObject query = new BasicDBObject();
  List<Integer> list = new ArrayList<Integer>();
  list.add(9);
  list.add(10);
  query.put("number", new BasicDBObject("$in", list));
  DBCursor cursor = collection.find(query);
  while (cursor.hasNext()) {
      System.out.println(cursor.next());
  }
  Here a List is passed into BasicDBObject's constructor together with the $in operator. Output:
  {"_id" : {"$oid" : "4dc7f7b7bd0fb9a86c6c80c5"} , "number" : 9}
  {"_id" : {"$oid" : "4dc7f7b7bd0fb9a86c6c80c6"} , "number" : 10}

  5) Comparison operators
  MongoDB also supports comparison operators such as greater-than and less-than. To output the documents with number > 5, use "$gt"; likewise, $lt means less-than:
  BasicDBObject query = new BasicDBObject();
  query.put("number", new BasicDBObject("$gt", 5));
  DBCursor cursor = collection.find(query);
  while (cursor.hasNext()) {
      System.out.println(cursor.next());
  }
  Output:
  {"_id" : {"$oid" : "4dc7f7b7bd0fb9a86c6c80c2"} , "number" : 6}
  {"_id" : {"$oid" : "4dc7f7b7bd0fb9a86c6c80c3"} , "number" : 7}
  {"_id" : {"$oid" : "4dc7f7b7bd0fb9a86c6c80c4"} , "number" : 8}
  {"_id" : {"$oid" : "4dc7f7b7bd0fb9a86c6c80c5"} , "number" : 9}
  {"_id" : {"$oid" : "4dc7f7b7bd0fb9a86c6c80c6"} , "number" : 10}
  Comparison operators can also be combined. For number > 5 and number < 8:
  BasicDBObject query = new BasicDBObject();
  query.put("number", new BasicDBObject("$gt", 5).append("$lt", 8));
  DBCursor cursor = collection.find(query);
  while (cursor.hasNext()) {
      System.out.println(cursor.next());
  }
  Similarly, for a not-equal condition use the $ne operator:
  BasicDBObject query5 = new BasicDBObject();
  query5.put("number", new BasicDBObject("$ne", 8));
  DBCursor cursor6 = collection.find(query5);
  while (cursor6.hasNext()) {
      System.out.println(cursor6.next());
  }
  This outputs every document except the one with number = 8.
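The query operators just shown ($gt, $lt, $ne, $in) behave like simple predicates over the number field. A minimal plain-Java sketch of that behavior, applied to the same 1–10 data set (the helper names here are invented for illustration, not driver methods):

```java
import java.util.*;
import java.util.stream.*;

// Mimics the effect of the $gt / $lt / $ne / $in queries on the numbers 1..10.
public class QueryOps {

    public static List<Integer> gt(List<Integer> xs, int n) {          // {$gt: n}
        return xs.stream().filter(x -> x > n).collect(Collectors.toList());
    }

    public static List<Integer> between(List<Integer> xs, int lo, int hi) { // {$gt: lo, $lt: hi}
        return xs.stream().filter(x -> x > lo && x < hi).collect(Collectors.toList());
    }

    public static List<Integer> ne(List<Integer> xs, int n) {          // {$ne: n}
        return xs.stream().filter(x -> x != n).collect(Collectors.toList());
    }

    public static List<Integer> in(List<Integer> xs, Set<Integer> wanted) { // {$in: [...]}
        return xs.stream().filter(wanted::contains).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> numbers = IntStream.rangeClosed(1, 10).boxed()
                .collect(Collectors.toList());
        System.out.println(gt(numbers, 5));                               // 6..10
        System.out.println(between(numbers, 5, 8));                       // 6, 7
        System.out.println(in(numbers, new HashSet<>(Arrays.asList(9, 10)))); // 9, 10
    }
}
```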
  Deleting documents
  Now let's learn how to delete documents, again using the 1-10 documents inserted above:
  1) Delete the first document
  DBObject doc = collection.findOne();
  collection.remove(doc);
  2) Delete a specific document
  For example, to delete the document with number = 2:
  BasicDBObject document = new BasicDBObject();
  document.put("number", 2);
  collection.remove(document);
  Note that the following deletes only the document with number = 3, because the second put overwrites the first:
  BasicDBObject document = new BasicDBObject();
  document.put("number", 2);
  document.put("number", 3);
  collection.remove(document);
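The gotcha above — two puts with the same key — comes from Map semantics: a BasicDBObject holds one value per key, so the later put wins. A quick stand-alone demonstration with a plain Map, which behaves the same way:

```java
import java.util.*;

// Shows why put("number", 2) followed by put("number", 3) leaves only number=3 in the query.
public class PutOverwrite {
    public static Object lastPutWins() {
        Map<String, Object> doc = new LinkedHashMap<>();
        doc.put("number", 2);
        doc.put("number", 3); // the second put replaces the first value under the same key
        return doc.get("number");
    }

    public static void main(String[] args) {
        System.out.println(lastPutWins()); // prints 3
    }
}
```

To match either value you would instead build an $in query, as in the next example.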

  3) Delete specific documents with the in operator
  The following deletes the documents with number = 4 and number = 5, using $in:
  BasicDBObject query2 = new BasicDBObject();
  List<Integer> list = new ArrayList<Integer>();
  list.add(4);
  list.add(5);
  query2.put("number", new BasicDBObject("$in", list));
  collection.remove(query2);
  4) Delete documents greater than some value with "$gt"
  BasicDBObject query = new BasicDBObject();
  query.put("number", new BasicDBObject("$gt", 9));
  collection.remove(query);
  This deletes the document with number = 10.
  5) Delete all documents
  DBCursor cursor = collection.find();
  while (cursor.hasNext()) {
      collection.remove(cursor.next());
  }
  Saving images to MongoDB
  Next, how to use the Java MongoDB GridFS API to store binary files such as images in MongoDB. For a detailed discussion of GridFS, see http://www.mongodb.org/display/DOCS/GridFS+Specification
  1) Save an image
  Code:
  String newFileName = "mkyong-java-image";
  File imageFile = new File("c:\\JavaWebHosting.png");
  GridFS gfsPhoto = new GridFS(db, "photo");
  GridFSInputFile gfsFile = gfsPhoto.createFile(imageFile);
  gfsFile.setFilename(newFileName);
  gfsFile.save();
  This stores c:\JavaWebHosting.png in MongoDB under the name mkyong-java-image.
  2) Read the image's metadata
  Code:
  String newFileName = "mkyong-java-image";
  GridFS gfsPhoto = new GridFS(db, "photo");
  GridFSDBFile imageForOutput = gfsPhoto.findOne(newFileName);
  System.out.println(imageForOutput);
  This prints the result as JSON:
  {
  "_id" :
  {
  "$oid" : "4dc9511a14a7d017fee35746"
  } ,
  "chunkSize" : 262144 ,
  "length" : 22672 ,
  "md5" : "1462a6cfa27669af1d8d21c2d7dd1f8b" ,
  "filename" : "mkyong-java-image" ,
  "contentType" : null ,
  "uploadDate" :
  {
  "$date" : "2011-05-10T14:52:10Z"
  } ,
  "aliases" : null
  }
  As you can see, the output is the file's metadata.

  3) List all saved images
  The following prints the metadata of every image stored under the "photo" namespace:
  GridFS gfsPhoto = new GridFS(db, "photo");
  DBCursor cursor = gfsPhoto.getFileList();
  while (cursor.hasNext()) {
      System.out.println(cursor.next());
  }
  4) Read an image from the database and save it to disk
  The following reads an image from the database and writes it out as a new image file on disk:
  String newFileName = "mkyong-java-image";
  GridFS gfsPhoto = new GridFS(db, "photo");
  GridFSDBFile imageForOutput = gfsPhoto.findOne(newFileName);
  imageForOutput.writeTo("c:\\JavaWebHostingNew.png");
  5) Delete an image
  String newFileName = "mkyong-java-image";
  GridFS gfsPhoto = new GridFS(db, "photo");
  gfsPhoto.remove(gfsPhoto.findOne(newFileName));
  Converting JSON data to DBObject
  MongoDB's com.mongodb.util.JSON class converts a JSON-format string into a DBObject. The Java driver uses the DBObject interface to store plain objects in the database; when a document is fetched from MongoDB it is automatically converted to a DBObject, which you then materialize into the object you need. For example, a JSON string such as:
  {
    'name' : 'mkyong',
    'age' : 30
  }
  is converted like this:
  DBObject dbObject = (DBObject) JSON.parse("{'name':'mkyong', 'age':30}");
  The complete code:
package com.mkyong.core;
import java.net.UnknownHostException;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;
import com.mongodb.Mongo;
import com.mongodb.MongoException;
import com.mongodb.util.JSON;

/**
 * Java MongoDB : Convert JSON data to DBObject
 */
public class App {
    public static void main(String[] args) {
        try {
            Mongo mongo = new Mongo("localhost", 27017);
            DB db = mongo.getDB("yourdb");
            DBCollection collection = db.getCollection("dummyColl");
            DBObject dbObject = (DBObject) JSON.parse("{'name':'mkyong', 'age':30}");
            collection.insert(dbObject);
            DBCursor cursorDoc = collection.find();
            while (cursorDoc.hasNext()) {
                System.out.println(cursorDoc.next());
            }
            System.out.println("Done");
        } catch (UnknownHostException e) {
            e.printStackTrace();
        } catch (MongoException e) {
            e.printStackTrace();
        }
    }
}
  The output is:
  {"_id" : {"$oid" : "4dc9ebb5237f275c2fe4959f"} , "name" : "mkyong" , "age" : 30}
  Done
  As you can see, the JSON data is converted straight into a MongoDB document and printed.
  Summary:
  This article covered day-to-day database operations with the MongoDB Java driver — inserting, deleting, and updating. The next tutorial will cover using MongoDB from Spring.
Three ways to set up a MongoDB cluster  mongodb  http://blog.csdn.net/luonanqin/article/details/8497860
MongoDB is a popular NoSQL database; it stores documents rather than key-value pairs. I won't go into MongoDB's features here — see the official manual: http://docs.mongodb.org/manual/
Download: http://www.mongodb.org/downloads

       Today I'll walk through setting up MongoDB's three cluster modes: Replica Set, Sharding, and Master-Slaver. I'll describe only the simplest cluster setup for each (for a production environment); with more nodes you can extrapolate, or consult the official docs. The OS is 64-bit Ubuntu, the client is the Java client, and the MongoDB version is mongodb-linux-x86_64-2.2.2.tgz.

Replica Set
       The Chinese translation is 副本集, though I don't like translating English terms into Chinese — it always feels off. Simply put, the cluster holds multiple copies of the data, so that if the primary node goes down, a secondary can keep serving data — provided its data is consistent with the primary's. The layout is as follows (diagram omitted):

       Mongodb(M) is the primary node, Mongodb(S) a secondary, and Mongodb(A) an arbiter. The primary and secondary nodes store data; the arbiter does not. Clients connect to both the primary and the secondary, but not to the arbiter.
       By default the primary serves all reads and writes and the secondary serves nothing, but you can configure the secondary to serve queries and take load off the primary, so that client read requests are routed to the secondary automatically. This setting is called Read Preference Modes, and the Java client offers a simple way to configure it without operating on the database directly.
       The arbiter is a special kind of node: it stores no data itself; its main job is to decide which secondary is promoted to primary after the primary goes down, which is why clients don't connect to it. Even though there is only one secondary here, you still need an arbiter to promote it. I didn't believe the arbiter was required at first either, but I tried running without one: when the primary died, the secondary stayed a secondary. So we do need it.
With the architecture covered, let's start building.

1. Create the data directories
Normally you wouldn't put the data directories under the MongoDB unpack directory, but for convenience we'll create them there:
mkdir -p /mongodb/data/master
mkdir -p /mongodb/data/slaver
mkdir -p /mongodb/data/arbiter
# the three directories are for the primary, secondary, and arbiter nodes
2. Create the config files
Since there are quite a few settings, we'll put them in files.
#master.conf  
dbpath=/mongodb/data/master  
logpath=/mongodb/log/master.log  
pidfilepath=/mongodb/master.pid  
directoryperdb=true  
logappend=true  
replSet=testrs  
bind_ip=10.10.148.130  
port=27017  
oplogSize=10000  
fork=true  
noprealloc=true  
#slaver.conf  
dbpath=/mongodb/data/slaver  
logpath=/mongodb/log/slaver.log  
pidfilepath=/mongodb/slaver.pid  
directoryperdb=true  
logappend=true  
replSet=testrs  
bind_ip=10.10.148.131  
port=27017  
oplogSize=10000  
fork=true  
noprealloc=true  
#arbiter.conf  
dbpath=/mongodb/data/arbiter  
logpath=/mongodb/log/arbiter.log  
pidfilepath=/mongodb/arbiter.pid  
directoryperdb=true  
logappend=true  
replSet=testrs  
bind_ip=10.10.148.132  
port=27017  
oplogSize=10000  
fork=true  
noprealloc=true  
Parameter reference:
dbpath: where the data files are stored
logpath: where the log file is written
pidfilepath: pid file, convenient for stopping mongodb
directoryperdb: store each database in its own directory, named after the database
logappend: append to the log instead of overwriting it
replSet: name of the replica set
bind_ip: ip address mongodb binds to
port: port the mongod process listens on, default 27017
oplogSize: maximum size of the mongodb oplog, in MB; defaults to 5% of free disk space
fork: run the process in the background
noprealloc: disable preallocation of data files

3. Start mongodb
Go to the bin directory of each node:
./mongod -f master.conf  
./mongod -f slaver.conf  
./mongod -f arbiter.conf  
Make sure the config file paths are correct; they may be relative or absolute.

4. Configure the master, slaver and arbiter nodes
Connect to mongodb from a client, or pick any one of the three nodes and connect to it directly.
./mongo 10.10.148.130:27017   #ip and port of any one node  
>use admin  
>cfg={ _id:"testrs", members:[ {_id:0,host:'10.10.148.130:27017',priority:2}, {_id:1,host:'10.10.148.131:27017',priority:1},   
{_id:2,host:'10.10.148.132:27017',arbiterOnly:true}] };  
>rs.initiate(cfg)             #apply the configuration  
       cfg can be any name (though best not a mongodb keyword; conf or config are fine). The outer _id is the replica set's name, and members lists every node's address and priority. The node with the highest priority becomes primary, here 10.10.148.130:27017. Note the special option required for the arbiter, arbiterOnly:true; without it the primary/secondary setup will not take effect.
      How long the configuration takes to apply depends on the machines: on decent hardware it usually takes effect within ten-odd seconds; on others it can take a minute or two. Once it is effective, rs.status() shows something like:
{  
        "set" : "testrs",  
        "date" : ISODate("2013-01-05T02:44:43Z"),  
        "myState" : 1,  
        "members" : [  
                {  
                        "_id" : 0,  
                        "name" : "10.10.148.130:27017",  
                        "health" : 1,  
                        "state" : 1,  
                        "stateStr" : "PRIMARY",  
                        "uptime" : 200,  
                        "optime" : Timestamp(1357285565000, 1),  
                        "optimeDate" : ISODate("2013-01-04T07:46:05Z"),  
                        "self" : true  
                },  
                {  
                        "_id" : 1,  
                        "name" : "10.10.148.131:27017",  
                        "health" : 1,  
                        "state" : 2,  
                        "stateStr" : "SECONDARY",  
                        "uptime" : 200,  
                        "optime" : Timestamp(1357285565000, 1),  
                        "optimeDate" : ISODate("2013-01-04T07:46:05Z"),  
                        "lastHeartbeat" : ISODate("2013-01-05T02:44:42Z"),  
                        "pingMs" : 0  
                },  
                {  
                        "_id" : 2,  
                        "name" : "10.10.148.132:27017",  
                        "health" : 1,  
                        "state" : 7,  
                        "stateStr" : "ARBITER",  
                        "uptime" : 200,  
                        "lastHeartbeat" : ISODate("2013-01-05T02:44:42Z"),  
                        "pingMs" : 0  
                }  
        ],  
        "ok" : 1  
}  
While the configuration is still taking effect, the status contains:
"stateStr" : "RECOVERING"  

You can also check the corresponding node's log, which shows it waiting for other nodes to come up or allocating data files.
       At this point the cluster setup is essentially complete. Testing is left as an exercise. First, insert data on the primary and read it back from the secondary (querying the secondary may run into a particular error, which you can look up online). Second, stop the primary and verify the secondary is promoted to primary. Third, restart the old primary and verify the secondary returns to its secondary role rather than staying primary. For the second and third, watch the cluster change in real time with rs.status().
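A note on the "particular error" when querying a secondary: in a default setup of this era, reads on a secondary are refused with a "not master and slaveOk=false" error until you mark the shell session as willing to read stale data. A sketch of what that looks like in the mongo shell (the database name `test` is just an example):

```
./mongo 10.10.148.131:27017     # connect to the secondary
> use test
> db.things.find()              # fails: not master and slaveOk=false
> rs.slaveOk()                  # allow this shell session to read from a secondary
> db.things.find()              # now returns the replicated documents
```

Java clients achieve the same effect through the Read Preference Modes mentioned earlier, rather than rs.slaveOk().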

Sharding
Like a Replica Set, Sharding needs an arbiter, but it additionally needs config server and router nodes, making it the most complex of the three setups. The deployment looks like this:

1. Start the data nodes

./mongod --fork --dbpath ../data/ --logpath ../log/set.log --replSet zaqzaq
 note: (to shut a node down: ./mongod --shutdown)

Parameter reference: --dbpath path to the database (data files)
--logpath path to the log file
--master run as a master
--slave run as a slave
--source IP address of the master
--oplogSize command-line parameter (used together with --master): disk space, in MB, reserved for the update log served to slaves; if unset it defaults to 5% of free disk space (minimum 1G on 64-bit machines, 50M on 32-bit).
--logappend append to the log file
--port port to listen on
--fork run in the background
--only replicate only the named database
--slavedelay interval between replication checks on the slave
--auth require authentication (username and password)

-h [ --help ]             show this usage information
--version                 show version information
-f [ --config ] arg       configuration file specifying additional options
--port arg                specify port number
--bind_ip arg             local ip address to bind listener - all local ips
                           bound by default
-v [ --verbose ]          be more verbose (include multiple times for more
                           verbosity e.g. -vvvvv)
--dbpath arg (=/data/db/) directory for datafiles
--quiet                   quieter output
--logpath arg             file to send all output to instead of stdout
--logappend               append to logpath instead of over-writing
--fork                    fork server process
--cpu                     periodically show cpu and iowait utilization
--noauth                  run without security
--auth                    run with security
--objcheck                inspect client data for validity on receipt
--quota                   enable db quota management
--quotaFiles arg          number of files allowed per db, requires --quota
--appsrvpath arg          root directory for the babble app server
--nocursors               diagnostic/debugging option
--nohints                 ignore query hints
--nohttpinterface         disable http interface (default port 28017)
--noscripting             disable scripting engine
--noprealloc              disable data file preallocation
--smallfiles              use a smaller default file size
--nssize arg (=16)        .ns file size (in MB) for new databases
--diaglog arg             0=off 1=W 2=R 3=both 7=W+some reads
--sysinfo                 print some diagnostic system information
--upgrade                 upgrade db if needed
--repair                  run repair on all dbs
--notablescan             do not allow table scans
--syncdelay arg (=60)     seconds between disk syncs (0 for never)

Replication options:
--master              master mode
--slave               slave mode
--source arg          when slave: specify master as <server:port>
--only arg            when slave: specify a single database to replicate
--pairwith arg        address of server to pair with
--arbiter arg         address of arbiter server (used in master-master and pair setups)
--autoresync          automatically resync if slave data is stale
--oplogSize arg       size limit (in MB) for op log
--opIdMem arg         size limit (in bytes) for in memory storage of op ids

Sharding options:
--configsvr           declare this is a config db of a cluster
--shardsvr            declare this is a shard db of a cluster

2. Start the config server

./mongod --configsvr --dbpath ../config --port 20000 --fork --logpath ../log/conf.log
3. Start the router

./mongos --configdb 10.10.8.4:20000 --port 27017 --fork --logpath ../log/root.log  
note: MongoDB 3.0 requires the number of config servers to be 1 or 3 (an odd count)
       Here we start the processes without config files; the parameters should all be familiar by now. Generally each data node gets its own config server, while arbiters need none. Note that when starting the router you must pass the config server's address on the command line.

4. Configure the Replica Set
       It may seem odd that Sharding needs a Replica Set configured. But it makes sense: the data on multiple nodes is related, and without a Replica Set there would be no way to mark them as one cluster. It is simply how mongodb works, so let's follow along. The configuration is the same as before: define a cfg, then initiate it.

./mongo 10.10.8.1:27017   #ip and port of any one node  
>use admin  
>cfg={ _id:"zaqzaq", members:[ {_id:0,host:'10.10.8.1:27017',priority:2},{_id:1,host:'10.10.8.2:27017',priority:1},   
{_id:2,host:'10.10.8.3:27017',arbiterOnly:true}] };  
>rs.initiate(cfg)             #apply the configuration  
note: machines 1, 2 and 3 now form one replica-set shard; repeat these steps to build more shards 
5. Configure Sharding
./mongo 10.10.8.8   #must connect to the router here  
>sh.addShard("zaqzaq/10.10.8.1:27017") #zaqzaq is the replica set name; adding the primary to the shard automatically discovers the set's primary, secondary and arbiter  
>db.runCommand({enableSharding:"diameter_test"})    #diameter_test is the database name  
>db.runCommand( { shardCollection: "diameter_test.dcca_dccr_test",key:{"__avpSessionId":1}})   
       The first command is self-explanatory. The second enables sharding on a database, and the third enables it on a collection, here dcca_dccr_test. The key is the crucial part: it has a large impact on query efficiency; see the Shard Key Overview for details.
       Sharding is now up, though this is the simplest possible setup and many options are left at their defaults. Poorly chosen settings can make performance abysmal, so read the official docs before changing the defaults.

Master-Slaver
This is the simplest setup; strictly speaking it is not really a cluster, just a master with a backup. It is also officially deprecated, so only a brief description follows.
./mongod --master --dbpath /data/masterdb/      #master node  
  
./mongod --slave --source <masterip:masterport> --dbpath /data/slavedb/     #slave node  
       Run these two commands on the master and slave respectively and the Master-Slaver setup is done. I have not tested whether the slave takes over when the master dies; since the mode is deprecated anyway, there is little reason to use it.

       Of the three setups, prefer Replica Set. Sharding only shows its strength on truly large data, since secondaries need time to sync. Sharding can gather data from multiple shards at the router for comparison before returning it to the client, but that is still fairly slow.
Calculating the memory size of a Java Object jvm http://blog.csdn.net/aaa1117a8w5s6d/article/details/8254922
In Java, an empty Object is 8 bytes; that is just the size of an object with no fields on the heap. Consider:
Object ob = new Object(); 
This declares a Java object, and the space it takes is 4 bytes + 8 bytes: the 4 bytes are the reference stored on the Java stack, and the 8 bytes are the object's information on the heap. Since every non-primitive Java type ultimately extends Object, every Java object is at least 8 bytes.
Knowing the size of Object, we can compute the size of other objects:
class NewObject {  
int count;  
boolean flag;  
Object ob;  
} 
Its size is: empty object (8 bytes) + int (4 bytes) + boolean (1 byte) + reference to Object (4 bytes) = 17 bytes. But because the JVM allocates object memory in multiples of 8, the nearest multiple of 8 above 17 is 24, so this object occupies 24 bytes.

In this article we discuss one question: how do we calculate (or rather, estimate) the memory occupied by a Java object?
The discussion assumes the "general case" of heap usage, excluding two situations:
 
Sometimes the JVM does not put the Object on the heap at all. For example, in principle a small thread-local object can live on the stack rather than the heap.
The memory an Object occupies depends on its current state, e.g. whether its monitor lock is in use, or whether it is being collected.
First, let's look at what a single Object looks like on the heap

(figure: layout of a single object on the heap)

In the heap, each object consists of four regions (A, B, C and D), explained below:
 
A: the object header, a few bytes describing the Object's current state
B: space for primitive fields (int, boolean, short, ...)
C: space for reference fields (references to other objects, 4 bytes each)
D: padding (explained below)
Now let's look at A, B, C and D in turn
A: the object header
The total space an object occupies includes not just its declared fields but also extra information such as the object header and padding. The header records information about the instance, such as its identity and state (e.g. whether the instance is currently reachable, or the state of its lock).
In the current JVM (HotSpot), the object header occupies:
 
8 bytes for an ordinary object
12 bytes for an array: the ordinary object's 8 bytes + 4 bytes for the array length
B: primitive fields
 
boolean and byte take 1 byte; char and short take 2 bytes; int and float take 4 bytes; long and double take 8 bytes
C: reference fields
Each reference takes 4 bytes
D: padding
In HotSpot, the total space per object is a multiple of 8; when the header plus declared fields do not reach a multiple of 8, the remainder is filled in automatically, and that filler is the "padding". Concretely:
 
An empty object (no declared fields) takes 8 bytes --> header 8 bytes
A class declaring a single boolean takes 16 bytes --> header (8 bytes) + boolean (1 byte) + padding (7 bytes)
A class declaring eight booleans takes 16 bytes --> header (8 bytes) + boolean (1 byte) * 8
These examples should make the rule clear
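The padding rule above can be checked with a small self-contained sketch. It assumes, as the article does, an 8-byte header and 4-byte references (a 32-bit HotSpot; 64-bit JVMs and compressed oops change these constants), and the class/method names are illustrative:

```java
public class ObjectSizeEstimate {
    static final int HEADER = 8;  // object header, per the article's 32-bit HotSpot assumption

    // round a raw size up to the next multiple of 8, as HotSpot allocation does
    static int align8(int rawSize) {
        return (rawSize + 7) / 8 * 8;
    }

    public static void main(String[] args) {
        System.out.println(align8(HEADER));             // empty object: 8
        System.out.println(align8(HEADER + 1));         // one boolean: 9 -> 16
        System.out.println(align8(HEADER + 8 * 1));     // eight booleans: 16 -> 16
        System.out.println(align8(HEADER + 4 + 1 + 4)); // NewObject from the text: 17 -> 24
    }
}
```

For a real measurement rather than an estimate, `java.lang.instrument.Instrumentation.getObjectSize` reports what the running JVM actually allocates.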
Storing sessions in Redis on Tomcat 7 tomcat session, redis http://my.oschina.net/gccr/blog/321083
Using a Redis server to store sessions has real advantages: it is a NoSQL store, and it is easy to scale out.

The steps below walk you through setting up Redis as a session store:

1. Download Redis and build and install it with:

wget http://download.redis.io/redis-stable.tar.gz 
tar xvzf redis-stable.tar.gz 
cd redis-stable 
make
2. Start Redis with:

cd RedisDirectory/src
./redis-server --port 6379
3. Download the latest Tomcat 7

4. Download the latest Jedis (a Java client for Redis), Tomcat Redis Session Manager, and Apache Commons Pool

5. Copy all of the jars above into the lib directory under the Tomcat 7 install directory

6. Add the following to Tomcat's conf/context.xml (or to the context block in server.xml):

<Valve className="com.radiadesign.catalina.session.RedisSessionHandlerValve" />
<!-- all Manager attributes below are optional; the values shown are the
     defaults (maxInactiveInterval is in seconds) -->
<Manager className="com.radiadesign.catalina.session.RedisSessionManager"
                   host="localhost"
                   port="6379"
                   database="0"
                   maxInactiveInterval="60" />
7. Restart Tomcat 7; you can now see session content being created in Redis.

Tomcat 7 sessions are now stored in Redis, which also manages the various aspects of the session lifecycle.

Download links for the components:

Redis:http://redis.io/
Jedis: https://github.com/xetorthio/jedis
Tomcat Redis Session Manager :https://github.com/jcoleman/tomcat-redis-session-manager/downloads
Apache Commons Pool :http://commons.apache.org/proper/commons-pool/download_pool.cgi
redis-cluster: notes and usage redis http://hot66hot.iteye.com/blog/2050676
Part 1: about redis cluster
1: current state of redis cluster
redis-cluster is planned for redis 3.0; see the author antirez's announcement: http://antirez.com/news/49 (ps: it slipped for a long time, but progress picked up this year). For the latest release notes see: https://raw.githubusercontent.com/antirez/redis/3.0/00-RELEASENOTES
The author's goal: Redis Cluster will support up to ~1000 nodes. Impressive...
Cluster features redis currently supports (tested):
1): automatic node discovery
2): slave->master election, cluster failover
3): hot resharding: online slot migration
4): cluster management: the cluster xxx commands
5): cluster management based on the config file (nodes-port.conf)
6): ASK redirection / MOVED redirection.
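A MOVED reply is how a node tells the client which node now owns a key's slot; a cluster-aware client parses it and re-issues the command against that node (Jedis does this internally). A minimal sketch of parsing such a redirection error (the "MOVED <slot> <host>:<port>" format is from the Redis cluster spec; the class and method names here are illustrative):

```java
public class MovedRedirect {
    final int slot;
    final String host;
    final int port;

    MovedRedirect(int slot, String host, int port) {
        this.slot = slot;
        this.host = host;
        this.port = port;
    }

    // parse an error reply such as "MOVED 3999 127.0.0.1:6381"
    static MovedRedirect parse(String err) {
        String[] parts = err.split(" ");
        if (parts.length != 3 || !parts[0].equals("MOVED")) {
            throw new IllegalArgumentException("not a MOVED redirection: " + err);
        }
        String[] addr = parts[2].split(":");
        return new MovedRedirect(Integer.parseInt(parts[1]), addr[0],
                Integer.parseInt(addr[1]));
    }

    public static void main(String[] args) {
        MovedRedirect r = parse("MOVED 3999 127.0.0.1:6381");
        System.out.println(r.slot + " " + r.host + " " + r.port); // 3999 127.0.0.1 6381
    }
}
```

ASK redirections look the same on the wire but apply to a single request during a live slot migration, rather than permanently updating the client's slot map.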
2: redis cluster architecture
1) redis-cluster architecture diagram
(figure: redis-cluster architecture)
Architecture details:
(1) all redis nodes are interconnected (PING-PONG mechanism), using a binary protocol internally to optimize speed and bandwidth.
(2) a node's failure only takes effect once more than half of the cluster's nodes detect it.
(3) clients connect directly to redis nodes with no proxy layer in between; a client need not connect to every node in the cluster, any single reachable node will do
(4) redis-cluster maps all physical nodes onto the slots [0-16383]; the cluster maintains node<->slot<->value
2) redis-cluster election: failover
 
(1) leader election involves all masters in the cluster: if more than half of the masters time out (cluster-node-timeout) communicating with a master, that master is considered down.
(2): when is the whole cluster unavailable (cluster_state:fail)? 
    a: if any master is down and has no slave, the cluster enters the fail state; put another way, the cluster fails whenever the slot mapping [0-16383] is incomplete. ps: redis-3.0.0.rc1 adds the cluster-require-full-coverage parameter, off by default, which lets the cluster tolerate partial failure.
    b: if more than half of the masters are down, the cluster enters the fail state regardless of slaves.
  ps: while the cluster is unavailable, every operation against it fails with ((error) CLUSTERDOWN The cluster is down)
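The slot in (4) is computed as CRC16(key) mod 16384, using the CRC16-CCITT (XModem) polynomial per the Redis cluster spec; keys containing a {...} hash tag are hashed only on the tag, so related keys can be forced onto one slot. A self-contained sketch (class and method names are illustrative):

```java
public class ClusterKeySlot {
    // CRC16-CCITT (XModem): polynomial 0x1021, initial value 0
    static int crc16(byte[] data) {
        int crc = 0;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) & 0xFFFF
                                            : (crc << 1) & 0xFFFF;
            }
        }
        return crc;
    }

    // map a key to one of the 16384 cluster slots, honoring {hash tags}
    static int keySlot(String key) {
        int open = key.indexOf('{');
        if (open >= 0) {
            int close = key.indexOf('}', open + 1);
            if (close > open + 1) {                 // only a non-empty tag counts
                key = key.substring(open + 1, close);
            }
        }
        return crc16(key.getBytes(java.nio.charset.StandardCharsets.UTF_8)) % 16384;
    }

    public static void main(String[] args) {
        // "123456789" is the standard CRC16/XMODEM check vector: 0x31C3 = 12739
        System.out.println(keySlot("123456789")); // 12739
        // keys sharing a hash tag land on the same slot
        System.out.println(keySlot("{user1000}.following") == keySlot("{user1000}.followers")); // true
    }
}
```

This is the same computation CLUSTER KEYSLOT performs server-side, and what smart clients use to route commands without a redirect.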
Part 2: using redis cluster
1: installing redis cluster
1): install the redis-cluster dependencies: the dependency libraries have compatibility issues and you will hit assorted errors during reshard unless you install the exact versions below.
(1) make sure zlib is installed on the system, otherwise gem install fails with (no such file to load -- zlib)

#download:zlib-1.2.6.tar  
./configure  
make  
make install  
 (2) install ruby: version(1.9.2)

# ruby1.9.2   
cd /path/ruby  
./configure -prefix=/usr/local/ruby  
make  
make install  
sudo cp ruby /usr/local/bin  
(3) install rubygems: version(1.8.16)

# rubygems-1.8.16.tgz  
cd /path/gem  
sudo ruby setup.rb  
sudo cp bin/gem /usr/local/bin  
(4) install gem-redis: version(3.0.0)

gem install redis --version 3.0.0  
#the gem source may be unreachable and the download may fail; if so, download manually and install:  
#download地址:http://rubygems.org/gems/redis/versions/3.0.0  
gem install -l /data/soft/redis-3.0.0.gem  
(5) install redis-cluster

cd /path/redis  
make  
sudo cp /opt/redis/src/redis-server /usr/local/bin  
sudo cp /opt/redis/src/redis-cli /usr/local/bin  
sudo cp /opt/redis/src/redis-trib.rb /usr/local/bin  
 
2: configuring redis cluster
1) redis config file layout:

 Use include to separate common settings from per-instance settings, for easier maintenance.
2) common redis settings.

#GENERAL  
daemonize no  
tcp-backlog 511  
timeout 0  
tcp-keepalive 0  
loglevel notice  
databases 16  
dir /opt/redis/data  
slave-serve-stale-data yes  
#slaves are read-only  
slave-read-only yes  
#not use default  
repl-disable-tcp-nodelay yes  
slave-priority 100  
#enable aof persistence  
appendonly yes  
#fsync the aof once per second  
appendfsync everysec  
#skip fsync for new writes while an aof rewrite is in progress  
no-appendfsync-on-rewrite yes  
auto-aof-rewrite-min-size 64mb  
lua-time-limit 5000  
#enable redis cluster mode  
cluster-enabled yes  
#timeout threshold for inter-node links  
cluster-node-timeout 15000  
cluster-migration-barrier 1  
slowlog-log-slower-than 10000  
slowlog-max-len 128  
notify-keyspace-events ""  
hash-max-ziplist-entries 512  
hash-max-ziplist-value 64  
list-max-ziplist-entries 512  
list-max-ziplist-value 64  
set-max-intset-entries 512  
zset-max-ziplist-entries 128  
zset-max-ziplist-value 64  
activerehashing yes  
client-output-buffer-limit normal 0 0 0  
client-output-buffer-limit slave 256mb 64mb 60  
client-output-buffer-limit pubsub 32mb 8mb 60  
hz 10  
aof-rewrite-incremental-fsync yes  
3) per-instance redis settings.
#include the common settings  
include /opt/redis/redis-common.conf  
#tcp port to listen on  
port 6379  
#maximum usable memory  
maxmemory 100m  
#eviction policy when memory is exhausted:  
# volatile-lru -> remove the key with an expire set using an LRU algorithm  
# allkeys-lru -> remove any key accordingly to the LRU algorithm  
# volatile-random -> remove a random key with an expire set  
# allkeys-random -> remove a random key, any key  
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)  
# noeviction -> don't expire at all, just return an error on write operations  
maxmemory-policy allkeys-lru  
#aof file name  
appendfilename "appendonly-6379.aof"  
#rdb is not used for persistence here, only during the add-slave process  
dbfilename dump-6379.rdb  
#cluster config file (generated automatically at startup)  
cluster-config-file nodes-6379.conf  
#stagger auto-aof-rewrite across redis instances deployed on the same machine, since under cluster their memory usage is roughly equal,  
#to prevent every redis process on the machine forking for an aof rewrite at the same instant and eating a pile of memory (ps: cluster requires aof)  
auto-aof-rewrite-percentage 80-100  
3: cluster operations
The cluster-related commands are listed below; for more redis commands see the docs: http://redis.readthedocs.org/en/latest/
 

Cluster  
CLUSTER INFO print cluster information  
CLUSTER NODES list all nodes currently known to the cluster, with their details.  
Nodes  
CLUSTER MEET <ip> <port> add the node at ip:port to the cluster, making it a member.  
CLUSTER FORGET <node_id> remove the node identified by node_id from the cluster.  
CLUSTER REPLICATE <node_id> make the current node a slave of the node identified by node_id.  
CLUSTER SAVECONFIG save the node's config file to disk.  
Slots  
CLUSTER ADDSLOTS <slot> [slot ...] assign one or more slots to the current node.  
CLUSTER DELSLOTS <slot> [slot ...] remove the assignment of one or more slots from the current node.  
CLUSTER FLUSHSLOTS remove all slots assigned to the current node, leaving it with no slots at all.  
CLUSTER SETSLOT <slot> NODE <node_id> assign slot to the node identified by node_id; if the slot is already assigned to another node, that node drops it first.  
CLUSTER SETSLOT <slot> MIGRATING <node_id> migrate this node's slot to the node identified by node_id.  
CLUSTER SETSLOT <slot> IMPORTING <node_id> import slot from the node identified by node_id into this node.  
CLUSTER SETSLOT <slot> STABLE cancel an import or migration of slot.  
Keys  
CLUSTER KEYSLOT <key> compute which slot the key belongs to.  
CLUSTER COUNTKEYSINSLOT <slot> return the number of key-value pairs currently in slot.  
CLUSTER GETKEYSINSLOT <slot> <count> return up to count keys from slot.  
 
4: redis cluster operations in practice
1) initialize and build the cluster
(1) start the cluster nodes (they must be empty; since beta3 they may hold data), each with its own config file and log output
 
redis-server /opt/redis/conf/redis-6380.conf > /opt/redis/logs/redis-6380.log 2>&1 &  
redis-server /opt/redis/conf/redis-6381.conf > /opt/redis/logs/redis-6381.log 2>&1 &  
redis-server /opt/redis/conf/redis-6382.conf > /opt/redis/logs/redis-6382.log 2>&1 &  
redis-server /opt/redis/conf/redis-7380.conf > /opt/redis/logs/redis-7380.log 2>&1 &  
redis-server /opt/redis/conf/redis-7381.conf > /opt/redis/logs/redis-7381.log 2>&1 &  
redis-server /opt/redis/conf/redis-7382.conf > /opt/redis/logs/redis-7382.log 2>&1 &  
 
(2): build the cluster with the bundled ruby tool (redis-trib.rb)
 

#build with redis-trib.rb's create subcommand  
#--replicas sets how many slave nodes each master in the Redis Cluster gets  
#roles are decided by order: masters first, then slaves (for readability, slave ports are master port + 1000)  
redis-trib.rb create --replicas 1 10.10.34.14:6380 10.10.34.14:6381 10.10.34.14:6382 10.10.34.14:7380 10.10.34.14:7381 10.10.34.14:7382  
(3): check the cluster state

#check with redis-trib.rb's check subcommand  
#ip:port may be any node in the cluster  
redis-trib.rb check 10.10.34.14:6380  
 If it ends with the following output, with no warnings or errors, the cluster started successfully and is in the ok state

[OK] All nodes agree about slots configuration.  
>>> Check for open slots...  
>>> Check slots coverage...  
[OK] All 16384 slots covered.  
2): adding a new master node
(1) add a master node: create an empty node, then move some slots onto it; this process currently requires manual steps
a): generate a config file from the port (ps: establish_config.sh is my own script that emits a config)
 

sh establish_config.sh 6386 > conf/redis-6386.conf  
 
b): start the node
 

redis-server /opt/redis/conf/redis-6386.conf > /opt/redis/logs/redis-6386.log 2>&1 &  
c): add the empty node to the cluster
add-node adds a node to the cluster; the first argument is the new node's ip:port, the second is any existing node's ip:port
 

redis-trib.rb add-node 10.10.34.14:6386 10.10.34.14:6381  
note: the new node holds no data, since it owns no slots. It joins as a master, and when the cluster needs to promote a slave to master, this new node will not be chosen
 
d): assign slots to the new node
 

redis-trib.rb reshard 10.10.34.14:6386  
#enter the number of slots to migrate at the prompt (ps: 500 here)  
How many slots do you want to move (from 1 to 16384)? 500  
#choose the node-id that will receive the slots  
What is the receiving node ID? f51e26b5d5ff74f85341f06f28f125b7254e61bf  
#choose the slot sources:  
#all redistributes from every master,  
#or enter the node ids of the masters to take slots from, ending with done  
Please enter all the source node IDs.  
  Type 'all' to use all the nodes as source nodes for the hash slots.  
  Type 'done' once you entered all the source nodes IDs.  
Source node #1:all  
#after the slots to be moved are printed, enter yes to move the slots and their data.  
#Do you want to proceed with the proposed reshard plan (yes/no)? yes  
#done  
3): adding a new slave node
a): the first three steps are the same as for a master
b) step four: connect to the new node with redis-cli and run: cluster replicate <the master's node-id>
 

cluster replicate 2b9ebcbd627ff0fd7a7bbcc5332fb09e72788835  
 
note: adding a slave online dumps the whole master process to an rdb and transfers it to the slave, which then loads the rdb into memory; the master may be unable to serve during the rdb transfer, and the whole process is io-heavy, so proceed with care.
For example, the rdb files produced by this slave addition:
 

-rw-r--r-- 1 root root  34946 Apr 17 18:23 dump-6386.rdb  
-rw-r--r-- 1 root root  34946 Apr 17 18:23 dump-7386.rdb  
4): online resharding:
When load or data is unbalanced, slots can be reshuffled online; the method is the same as resharding for a new master, except the node being resharded is an existing one.
5): removing a slave node

#redis-trib del-node ip:port '<node-id>'  
redis-trib.rb del-node 10.10.34.14:7386 'c7ee2fca17cb79fe3c9822ced1d4f6c5e169e378'  
 6): removing a master node
  a): before removing a master, first reshard away all of its slots, then delete the node (currently the removed master's slots can only be migrated onto a single node)
 

#migrate the slots of the master 10.10.34.14:6386 onto 10.10.34.14:6380  
redis-trib.rb reshard 10.10.34.14:6380  
#enter the number of slots to migrate at the prompt (ps: 500 here)  
How many slots do you want to move (from 1 to 16384)? 500 (the total slot count of the master being removed)  
#choose the node-id that will receive the slots (10.10.34.14:6380)  
What is the receiving node ID? c4a31c852f81686f6ed8bcd6d1b13accdc947fd2 (ps: node-id of 10.10.34.14:6380)  
Please enter all the source node IDs.  
  Type 'all' to use all the nodes as source nodes for the hash slots.  
  Type 'done' once you entered all the source nodes IDs.  
Source node #1:f51e26b5d5ff74f85341f06f28f125b7254e61bf (node-id of the master being removed)  
Source node #2:done  
#after the slots to be moved are printed, enter yes to move the slots and their data.  
#Do you want to proceed with the proposed reshard plan (yes/no)? yes  
 
b): delete the now-empty master node
 

redis-trib.rb del-node 10.10.34.14:6386 'f51e26b5d5ff74f85341f06f28f125b7254e61bf'  
Part 3: redis cluster clients (Jedis)
1: basic client usage
 

private static BinaryJedisCluster jc;  
  static {  
       //any one instance in the cluster is enough  
        Set<HostAndPort> jedisClusterNodes = new HashSet<HostAndPort>();  
        jedisClusterNodes.add(new HostAndPort("10.10.34.14", 6380));  
        jedisClusterNodes.add(new HostAndPort("10.10.34.14", 6381));  
        jedisClusterNodes.add(new HostAndPort("10.10.34.14", 6382));  
        jedisClusterNodes.add(new HostAndPort("10.10.34.14", 6383));  
        jedisClusterNodes.add(new HostAndPort("10.10.34.14", 6384));  
        jedisClusterNodes.add(new HostAndPort("10.10.34.14", 7380));  
        jedisClusterNodes.add(new HostAndPort("10.10.34.14", 7381));  
        jedisClusterNodes.add(new HostAndPort("10.10.34.14", 7382));  
        jedisClusterNodes.add(new HostAndPort("10.10.34.14", 7383));  
        jedisClusterNodes.add(new HostAndPort("10.10.34.14", 7384));  
        jc = new BinaryJedisCluster(jedisClusterNodes);  
    }  
@Test  
    public void testBenchRedisSet() throws Exception {  
        final Stopwatch stopwatch = new Stopwatch();  
        List list = buildBlogVideos();  
        for (int i = 0; i < 1000; i++) {  
            String key = "key:" + i;  
            stopwatch.start();  
            byte[] bytes1 = protostuffSerializer.serialize(list);  
            jc.setex(key, 60 * 60, bytes1);  
            stopwatch.stop();  
        }  
        System.out.println("time=" + stopwatch.toString());  
    }</span>  
2: pitfalls of the jedis client (alert)
1) under cluster, redis slaves accept no read or write operations at all,
2) the client does not support batch keys operations, nor select dbNum; there is only one db: select 0
3) single-node methods such as JedisCluster's info() cannot be called; they return a (No way to dispatch this command to Redis Cluster) error.
4) JedisCluster has no byte[]-based API; you have to extend it yourself (attached is my byte[]-based BinaryJedisCluster api)
5) if the cluster has a password set, it cannot be used at all... (restart the redis service without one)



package redis.clients.jedis;

import redis.clients.jedis.BinaryClient.LIST_POSITION;
import redis.clients.util.SafeEncoder;

import java.nio.charset.Charset;
import java.util.List;
import java.util.Map;
import java.util.Set;

/**
 * Created by sohu.yijunzhang on 14-4-19.
 */
public class BinaryJedisCluster extends JedisCluster {

    public BinaryJedisCluster(Set<HostAndPort> nodes, int timeout) {
        super(nodes, timeout);
    }

    public BinaryJedisCluster(Set<HostAndPort> nodes) {
        super(nodes);
    }

    public BinaryJedisCluster(Set<HostAndPort> jedisClusterNode, int timeout, int maxRedirections) {
        super(jedisClusterNode, timeout, maxRedirections);
    }


    public String set(final String key, final byte[] value) {
        return new JedisClusterCommand<String>(connectionHandler, timeout,
                maxRedirections) {

            public String execute(Jedis connection) {
                return connection.set(SafeEncoder.encode(key), value);
            }
        }.run(key);
    }

    public byte[] getBytes(final String key) {
        return new JedisClusterCommand<byte[]>(connectionHandler, timeout,
                maxRedirections) {

            public byte[] execute(Jedis connection) {
                return connection.get(SafeEncoder.encode(key));
            }
        }.run(key);
    }


    public Boolean setbit(final String key, final long offset,
                          final byte[] value) {
        return new JedisClusterCommand<Boolean>(connectionHandler, timeout,
                maxRedirections) {

            public Boolean execute(Jedis connection) {
                return connection.setbit(SafeEncoder.encode(key), offset,
                        value);
            }
        }.run(key);
    }


    public Long setrange(final String key, final long offset, final byte[] value) {
        return new JedisClusterCommand<Long>(connectionHandler, timeout,
                maxRedirections) {

            public Long execute(Jedis connection) {
                return connection.setrange(SafeEncoder.encode(key), offset,
                        value);
            }
        }.run(key);
    }


    public byte[] getrangeBytes(final String key, final long startOffset,
                                final long endOffset) {
        return new JedisClusterCommand<byte[]>(connectionHandler, timeout,
                maxRedirections) {

            public byte[] execute(Jedis connection) {
                return connection.getrange(SafeEncoder.encode(key),
                        startOffset, endOffset);
            }
        }.run(key);
    }


    public byte[] getSetBytes(final String key, final byte[] value) {
        return new JedisClusterCommand<byte[]>(connectionHandler, timeout,
                maxRedirections) {

            public byte[] execute(Jedis connection) {
                return connection.getSet(SafeEncoder.encode(key), value);
            }
        }.run(key);
    }


    public Long setnx(final String key, final byte[] value) {
        return new JedisClusterCommand<Long>(connectionHandler, timeout,
                maxRedirections) {

            public Long execute(Jedis connection) {
                return connection.setnx(SafeEncoder.encode(key), value);
            }
        }.run(key);
    }


    public String setex(final String key, final int seconds, final byte[] value) {
        return new JedisClusterCommand<String>(connectionHandler, timeout,
                maxRedirections) {

            public String execute(Jedis connection) {
                return connection.setex(SafeEncoder.encode(key), seconds,
                        value);
            }
        }.run(key);
    }

    public byte[] substrBytes(final String key, final int start, final int end) {
        return new JedisClusterCommand<byte[]>(connectionHandler, timeout,
                maxRedirections) {

            public byte[] execute(Jedis connection) {
                return connection
                        .substr(SafeEncoder.encode(key), start, end);
            }
        }.run(key);
    }


    public Long hset(final String key, final String field, final byte[] value) {
        return new JedisClusterCommand<Long>(connectionHandler, timeout,
                maxRedirections) {

            public Long execute(Jedis connection) {
                return connection
                        .hset(SafeEncoder.encode(key), field.getBytes(Charset.defaultCharset()), value);
            }
        }.run(key);
    }


    public byte[] hgetBytes(final String key, final String field) {
        return new JedisClusterCommand<byte[]>(connectionHandler, timeout,
                maxRedirections) {

            public byte[] execute(Jedis connection) {
                return connection.hget(SafeEncoder.encode(key), field.getBytes(Charset.defaultCharset()));
            }
        }.run(key);
    }


    public Long hsetnx(final String key, final String field, final byte[] value) {
        return new JedisClusterCommand<Long>(connectionHandler, timeout,
                maxRedirections) {

            public Long execute(Jedis connection) {
                return connection.hsetnx(SafeEncoder.encode(key), field.getBytes(Charset.defaultCharset()),
                        value);
            }
        }.run(key);
    }


    public String hmsetBytes(final String key, final Map<byte[], byte[]> hash) {
        return new JedisClusterCommand<String>(connectionHandler, timeout,
                maxRedirections) {

            public String execute(Jedis connection) {
                return connection.hmset(SafeEncoder.encode(key), hash);
            }
        }.run(key);
    }


    public List<byte[]> hmget(final String key, final byte[]... fields) {
        return new JedisClusterCommand<List<byte[]>>(connectionHandler,
                timeout, maxRedirections) {

            public List<byte[]> execute(Jedis connection) {
                return connection.hmget(SafeEncoder.encode(key), fields);
            }
        }.run(key);
    }


    public Set<byte[]> hkeysBytes(final String key) {
        return new JedisClusterCommand<Set<byte[]>>(connectionHandler, timeout,
                maxRedirections) {

            public Set<byte[]> execute(Jedis connection) {
                return connection.hkeys(SafeEncoder.encode(key));
            }
        }.run(key);
    }


    public List<byte[]> hvalsBytes(final String key) {
        return new JedisClusterCommand<List<byte[]>>(connectionHandler,
                timeout, maxRedirections) {

            public List<byte[]> execute(Jedis connection) {
                return connection.hvals(SafeEncoder.encode(key));
            }
        }.run(key);
    }


    public Map<byte[], byte[]> hgetAllBytes(final String key) {
        return new JedisClusterCommand<Map<byte[], byte[]>>(connectionHandler,
                timeout, maxRedirections) {

            public Map<byte[], byte[]> execute(Jedis connection) {
                return connection.hgetAll(SafeEncoder.encode(key));
            }
        }.run(key);
    }


    public Long rpush(final String key, final byte[]... string) {
        return new JedisClusterCommand<Long>(connectionHandler, timeout,
                maxRedirections) {

            public Long execute(Jedis connection) {
                return connection.rpush(SafeEncoder.encode(key), string);
            }
        }.run(key);
    }


    public Long lpush(final String key, final byte[]... string) {
        return new JedisClusterCommand<Long>(connectionHandler, timeout,
                maxRedirections) {

            public Long execute(Jedis connection) {
                return connection.lpush(SafeEncoder.encode(key), string);
            }
        }.run(key);
    }


    public List<byte[]> lrangeBytes(final String key, final long start,
                               final long end) {
        return new JedisClusterCommand<List<byte[]>>(connectionHandler,
                timeout, maxRedirections) {

            public List<byte[]> execute(Jedis connection) {
                return connection
                        .lrange(SafeEncoder.encode(key), start, end);
            }
        }.run(key);
    }


    public byte[] lindexBytes(final String key, final long index) {
        return new JedisClusterCommand<byte[]>(connectionHandler, timeout,
                maxRedirections) {

            public byte[] execute(Jedis connection) {
                return connection.lindex(SafeEncoder.encode(key), index);
            }
        }.run(key);
    }


    public String lset(final String key, final long index, final byte[] value) {
        return new JedisClusterCommand<String>(connectionHandler, timeout,
                maxRedirections) {

            public String execute(Jedis connection) {
                return connection
                        .lset(SafeEncoder.encode(key), index, value);
            }
        }.run(key);
    }


    public Long lrem(final String key, final long count, final byte[] value) {
        return new JedisClusterCommand<Long>(connectionHandler, timeout,
                maxRedirections) {

            public Long execute(Jedis connection) {
                return connection
                        .lrem(SafeEncoder.encode(key), count, value);
            }
        }.run(key);
    }


    public byte[] lpopBytes(final String key) {
        return new JedisClusterCommand<byte[]>(connectionHandler, timeout,
                maxRedirections) {

            public byte[] execute(Jedis connection) {
                return connection.lpop(SafeEncoder.encode(key));
            }
        }.run(key);
    }


    public byte[] rpopBytes(final String key) {
        return new JedisClusterCommand<byte[]>(connectionHandler, timeout,
                maxRedirections) {

            public byte[] execute(Jedis connection) {
                return connection.rpop(SafeEncoder.encode(key));
            }
        }.run(key);
    }


    public Long sadd(final String key, final byte[]... member) {
        return new JedisClusterCommand<Long>(connectionHandler, timeout,
                maxRedirections) {

            public Long execute(Jedis connection) {
                return connection.sadd(SafeEncoder.encode(key), member);
            }
        }.run(key);
    }


    public Set<byte[]> smembersBytes(final String key) {
        return new JedisClusterCommand<Set<byte[]>>(connectionHandler, timeout,
                maxRedirections) {

            public Set<byte[]> execute(Jedis connection) {
                return connection.smembers(SafeEncoder.encode(key));
            }
        }.run(key);
    }


    public Long srem(final String key, final byte[]... member) {
        return new JedisClusterCommand<Long>(connectionHandler, timeout,
                maxRedirections) {

            public Long execute(Jedis connection) {
                return connection.srem(SafeEncoder.encode(key), member);
            }
        }.run(key);
    }


    public byte[] spopBytes(final String key) {
        return new JedisClusterCommand<byte[]>(connectionHandler, timeout,
                maxRedirections) {

            public byte[] execute(Jedis connection) {
                return connection.spop(SafeEncoder.encode(key));
            }
        }.run(key);
    }


    public Boolean sismember(final String key, final byte[] member) {
        return new JedisClusterCommand<Boolean>(connectionHandler, timeout,
                maxRedirections) {

            public Boolean execute(Jedis connection) {
                return connection.sismember(SafeEncoder.encode(key), member);
            }
        }.run(key);
    }


    public byte[] srandmemberBytes(final String key) {
        return new JedisClusterCommand<byte[]>(connectionHandler, timeout,
                maxRedirections) {

            public byte[] execute(Jedis connection) {
                return connection.srandmember(SafeEncoder.encode(key));
            }
        }.run(key);
    }


    public Long zadd(final String key, final double score, final byte[] member) {
        return new JedisClusterCommand<Long>(connectionHandler, timeout,
                maxRedirections) {

            public Long execute(Jedis connection) {
                return connection.zadd(SafeEncoder.encode(key), score,
                        member);
            }
        }.run(key);
    }


    public Set<byte[]> zrangeBytes(final String key, final long start, final long end) {
        return new JedisClusterCommand<Set<byte[]>>(connectionHandler, timeout,
                maxRedirections) {

            public Set<byte[]> execute(Jedis connection) {
                return connection
                        .zrange(SafeEncoder.encode(key), start, end);
            }
        }.run(key);
    }


    public Long zrem(final String key, final byte[]... member) {
        return new JedisClusterCommand<Long>(connectionHandler, timeout,
                maxRedirections) {

            public Long execute(Jedis connection) {
                return connection.zrem(SafeEncoder.encode(key), member);
            }
        }.run(key);
    }


    public Double zincrby(final String key, final double score,
                          final byte[] member) {
        return new JedisClusterCommand<Double>(connectionHandler, timeout,
                maxRedirections) {

            public Double execute(Jedis connection) {
                return connection.zincrby(SafeEncoder.encode(key), score,
                        member);
            }
        }.run(key);
    }


    public Long zrank(final String key, final byte[] member) {
        return new JedisClusterCommand<Long>(connectionHandler, timeout,
                maxRedirections) {

            public Long execute(Jedis connection) {
                return connection.zrank(SafeEncoder.encode(key), member);
            }
        }.run(key);
    }


    public Long zrevrank(final String key, final byte[] member) {
        return new JedisClusterCommand<Long>(connectionHandler, timeout,
                maxRedirections) {

            public Long execute(Jedis connection) {
                return connection.zrevrank(SafeEncoder.encode(key), member);
            }
        }.run(key);
    }


    public Set<byte[]> zrevrangeBytes(final String key, final long start,
                                 final long end) {
        return new JedisClusterCommand<Set<byte[]>>(connectionHandler, timeout,
                maxRedirections) {

            public Set<byte[]> execute(Jedis connection) {
                return connection.zrevrange(SafeEncoder.encode(key), start,
                        end);
            }
        }.run(key);
    }


    public Double zscore(final String key, final byte[] member) {
        return new JedisClusterCommand<Double>(connectionHandler, timeout,
                maxRedirections) {

            public Double execute(Jedis connection) {
                return connection.zscore(SafeEncoder.encode(key), member);
            }
        }.run(key);
    }


    public List<byte[]> sortBytes(final String key) {
        return new JedisClusterCommand<List<byte[]>>(connectionHandler,
                timeout, maxRedirections) {

            public List<byte[]> execute(Jedis connection) {
                return connection.sort(SafeEncoder.encode(key));
            }
        }.run(key);
    }


    public List<byte[]> sortBytes(final String key,
                             final SortingParams sortingParameters) {
        return new JedisClusterCommand<List<byte[]>>(connectionHandler,
                timeout, maxRedirections) {

            public List<byte[]> execute(Jedis connection) {
                return connection.sort(SafeEncoder.encode(key),
                        sortingParameters);
            }
        }.run(key);
    }


    public Set<byte[]> zrangeByScoreBytes(final String key, final double min,
                                     final double max) {
        return new JedisClusterCommand<Set<byte[]>>(connectionHandler, timeout,
                maxRedirections) {

            public Set<byte[]> execute(Jedis connection) {
                return connection.zrangeByScore(SafeEncoder.encode(key),
                        min, max);
            }
        }.run(key);
    }


    public Set<byte[]> zrangeByScoreBytes(final String key, final String min,
                                     final String max) {
        return new JedisClusterCommand<Set<byte[]>>(connectionHandler, timeout,
                maxRedirections) {

            public Set<byte[]> execute(Jedis connection) {
                return connection.zrangeByScore(SafeEncoder.encode(key),
                        SafeEncoder.encode(min), SafeEncoder.encode(max));
            }
        }.run(key);
    }


    public Set<byte[]> zrevrangeByScoreBytes(final String key, final double max,
                                        final double min) {
        return new JedisClusterCommand<Set<byte[]>>(connectionHandler, timeout,
                maxRedirections) {

            public Set<byte[]> execute(Jedis connection) {
                return connection.zrevrangeByScore(SafeEncoder.encode(key),
                        max, min);
            }
        }.run(key);
    }


    public Set<byte[]> zrangeByScoreBytes(final String key, final double min,
                                     final double max, final int offset, final int count) {
        return new JedisClusterCommand<Set<byte[]>>(connectionHandler, timeout,
                maxRedirections) {

            public Set<byte[]> execute(Jedis connection) {
                return connection.zrangeByScore(SafeEncoder.encode(key),
                        min, max, offset, count);
            }
        }.run(key);
    }


    public Set<byte[]> zrevrangeByScoreBytes(final String key, final String max,
                                        final String min) {
        return new JedisClusterCommand<Set<byte[]>>(connectionHandler, timeout,
                maxRedirections) {

            public Set<byte[]> execute(Jedis connection) {
                return connection.zrevrangeByScore(SafeEncoder.encode(key),
                        SafeEncoder.encode(max), SafeEncoder.encode(min));
            }
        }.run(key);
    }


    public Set<byte[]> zrangeByScoreBytes(final String key, final String min,
                                     final String max, final int offset, final int count) {
        return new JedisClusterCommand<Set<byte[]>>(connectionHandler, timeout,
                maxRedirections) {

            public Set<byte[]> execute(Jedis connection) {
                return connection.zrangeByScore(SafeEncoder.encode(key),
                        SafeEncoder.encode(min), SafeEncoder.encode(max), offset, count);
            }
        }.run(key);
    }


    public Set<byte[]> zrevrangeByScoreBytes(final String key, final double max,
                                        final double min, final int offset, final int count) {
        return new JedisClusterCommand<Set<byte[]>>(connectionHandler, timeout,
                maxRedirections) {

            public Set<byte[]> execute(Jedis connection) {
                return connection.zrevrangeByScore(SafeEncoder.encode(key),
                        max, min, offset, count);
            }
        }.run(key);
    }


    public Set<byte[]> zrevrangeByScoreBytes(final String key, final String max,
                                        final String min, final int offset, final int count) {
        return new JedisClusterCommand<Set<byte[]>>(connectionHandler, timeout,
                maxRedirections) {

            public Set<byte[]> execute(Jedis connection) {
                return connection.zrevrangeByScore(SafeEncoder.encode(key),
                        SafeEncoder.encode(max), SafeEncoder.encode(min), offset, count);
            }
        }.run(key);
    }


    public Long linsert(final String key, final LIST_POSITION where,
                        final byte[] pivot, final byte[] value) {
        return new JedisClusterCommand<Long>(connectionHandler, timeout,
                maxRedirections) {

            public Long execute(Jedis connection) {
                return connection.linsert(SafeEncoder.encode(key), where,
                        pivot, value);
            }
        }.run(key);
    }


    public Long lpushx(final String key, final byte[]... string) {
        return new JedisClusterCommand<Long>(connectionHandler, timeout,
                maxRedirections) {

            public Long execute(Jedis connection) {
                return connection.lpushx(SafeEncoder.encode(key), string);
            }
        }.run(key);
    }


    public Long rpushx(final String key, final byte[]... string) {
        return new JedisClusterCommand<Long>(connectionHandler, timeout,
                maxRedirections) {

            public Long execute(Jedis connection) {
                return connection.rpushx(SafeEncoder.encode(key), string);
            }
        }.run(key);
    }


    public List<byte[]> blpopBytes(final String arg) {
        return new JedisClusterCommand<List<byte[]>>(connectionHandler,
                timeout, maxRedirections) {

            public List<byte[]> execute(Jedis connection) {
                return connection.blpop(SafeEncoder.encode(arg));
            }
        }.run(arg); // route by key: run(null) cannot be dispatched in a cluster
    }


    public List<byte[]> brpopBytes(final String arg) {
        return new JedisClusterCommand<List<byte[]>>(connectionHandler,
                timeout, maxRedirections) {

            public List<byte[]> execute(Jedis connection) {
                return connection.brpop(SafeEncoder.encode(arg));
            }
        }.run(arg); // route by key: run(null) cannot be dispatched in a cluster
    }

}
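Every method above follows the same template-method pattern: a `JedisClusterCommand` subclass supplies `execute(Jedis connection)` with the per-command logic, while `run(key)` owns slot lookup, redirection handling, and retrying up to `maxRedirections`. A minimal, server-free sketch of that retry skeleton (the class and method names here are illustrative, not the Jedis API):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Simplified stand-in for JedisClusterCommand: execute() holds the
// per-command logic, run() owns the retry loop.
abstract class RetryingCommand<T> {
    private final int maxAttempts;

    RetryingCommand(int maxAttempts) { this.maxAttempts = maxAttempts; }

    // Subclasses implement the actual call, like execute(Jedis connection).
    abstract T execute();

    // Retry on failure, the way run(key) retries on redirections and
    // connection errors in the real client.
    T run() {
        RuntimeException last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return execute();
            } catch (RuntimeException e) {
                last = e; // the real client re-resolves the target node here
            }
        }
        throw last;
    }
}

public class RetryDemo {
    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        String result = new RetryingCommand<String>(5) {
            @Override
            String execute() {
                // fail twice, then succeed, to exercise the retry loop
                if (calls.incrementAndGet() < 3) {
                    throw new RuntimeException("transient failure");
                }
                return "value";
            }
        }.run();
        System.out.println(result + " after " + calls.get() + " attempts");
    }
}
```

The anonymous-subclass call sites above (`new JedisClusterCommand<...>(...){...}.run(key)`) are exactly this shape, with the command logic captured in `execute`.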


https://github.com/zaqzaq/jedis my fork of the redis client: it adds byte[] transport for the cluster (BinaryJedisCluster.java) and fixes the bugs in JedisCluster.java around flushdb, flushall and dbsize calls. Nothing else is fixed, because nothing else was needed (it is mainly used for integration with MyBatis).
 
redis3 password authentication redis http://www.linuxidc.com/Linux/2014-07/104136.htm
1. How do you initialize the redis password?
 
Two steps in total:
 
a. The config file has a parameter, requirepass, which sets the redis access password.
 
For example: requirepass test123
 
b. A change to the config file only takes effect after redis is restarted.
 
 
 
2. How do you set a password without restarting redis?
 
a. Still set requirepass in the config file (so the password survives a later restart).
 
# requirepass foobared
  Change this to:
 
requirepass  test123
 
 
 
b. Connect to redis and change the parameter at runtime.
 
Check the current password:
 
[root@slaver251 redis-2.4.16]# ./src/redis-cli -p 6379
 redis 127.0.0.1:6379> 
 redis 127.0.0.1:6379> config get requirepass
 1) "requirepass"
 2) (nil)
 
The password is empty.
 
Now set one:
 
redis 127.0.0.1:6379> config set requirepass test123
 OK
 
Query the password again:
 
redis 127.0.0.1:6379> config get requirepass
 (error) ERR operation not permitted
 
This time it fails: the connection now has to authenticate first.
 
redis 127.0.0.1:6379> auth test123
 OK
 
Query again:
 
redis 127.0.0.1:6379> config get requirepass
 1) "requirepass"
 2) "test123"
 
The password has been changed.
 
When redis is eventually restarted, the password stays in effect, because the config file was updated as well.
 
If the config file had not been given the password, a restart would leave redis with no password at all.
 
 
 
3. How do you log in to a password-protected redis?
 
a. Supply the password at login:
 
[root@slaver251 redis-2.4.16]# ./src/redis-cli -p 6379 -a test123
 redis 127.0.0.1:6379> 
 redis 127.0.0.1:6379> config get requirepass
 1) "requirepass"
 2) "test123"
 
 
 
b. Log in first, then authenticate:
 
[root@slaver251 redis-2.4.16]#  ./src/redis-cli -p 6379
 redis 127.0.0.1:6379> 
 redis 127.0.0.1:6379> auth test123
 OK
 redis 127.0.0.1:6379> config get requirepass
 1) "requirepass"
 2) "test123"
 redis 127.0.0.1:6379>
 
 
 
4. The master has a password; how should a slave be configured?
 
When the master has a password, the slave must be configured with the matching password, otherwise it cannot replicate normally. The relevant parameter is:
 
#masterauth
 
For example:
 
masterauth  mstpassword

Installing and testing Redis on Ubuntu 14.04 http://www.linuxidc.com/Linux/2014-05/101544.htm

Redis cluster detailed documentation http://www.linuxidc.com/Linux/2013-09/90118.htm

Installing Redis on Ubuntu 12.10 (illustrated) + connecting with Jedis http://www.linuxidc.com/Linux/2013-06/85816.htm

Redis series: installation, deployment and maintenance http://www.linuxidc.com/Linux/2012-12/75627.htm

Installing Redis on CentOS 6.3 http://www.linuxidc.com/Linux/2012-12/75314.htm

The redis.conf configuration file explained http://www.linuxidc.com/Linux/2013-11/92524.htm
MySQL "Host is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'": how to fix it mysql http://www.cnblogs.com/susuyu/archive/2013/05/28/3104249.html
Environment: linux

Error: Host is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'

Cause:

  The same IP produced too many aborted connections in a short time (more than the server's max_connect_errors allows), so MySQL blocked the host.

Fixes:

1. Raise the allowed max_connect_errors (treats the symptom, not the cause):

  (1) Check the current value in MySQL: show variables like '%max_connect_errors%';

  (2) Raise it to 1000: set global max_connect_errors = 1000;

  (3) Verify the change: show variables like '%max_connect_errors%';

2. Clear the blocked-host cache with mysqladmin flush-hosts (if you do not know where mysqladmin lives, find it with: whereis mysqladmin):

  (1) Run it from the directory you found: /usr/bin/mysqladmin flush-hosts -h192.168.1.1 -P3308 -uroot -prootpwd;

  Notes:

    Adjust the port, user name, and password as needed.

    With a master/slave setup, flush both the master and the slave (I lost half a day to this: a few simple commands, but I had only flushed one of them).

    This second step can also be done from inside MySQL: flush hosts;
 
LVS ActiveConn and InActConn lvs
LVS's ActiveConn count has always puzzled me: the number is often huge while the real servers show far fewer active connections. After many searches, mostly fruitless, here is what I have pieced together.
      ActiveConn counts connections in the ESTABLISHED TCP state; InActConn counts connections in every other TCP state. So why is the ActiveConn reported by LVS so much higher than the ESTABLISHED count netstat shows on the real servers? Good question, and the one that puzzled me the longest. LVS keeps its own connection table with its own timeouts. Check them with ipvsadm -L --timeout; the defaults are 900 120 300 (TCP, TCPFIN and UDP, in seconds). In other words, once a TCP connection passes through LVS, LVS keeps its entry for 15 minutes, whether or not the connection is still alive. With a burst of concurrent requests inside a 15-minute window, the counter climbs steeply.
      Usually what we really want from this number is each machine's true current connection count. Knowing how ActiveConn is produced makes that easy. Say LVS load-balances a website in DR mode with nginx on the back end. With a healthy application, a request's connection lasts at most about five seconds. Run ipvsadm --set 5 10 300 so a TCP entry is kept for only 5 seconds; a high ActiveConn will then drop quickly until it roughly matches the current-connection count in nginx's status page. Tune that 5 up or down until the real servers' status counts and LVS's ActiveConn agree.
      That's all.
Redis 3 cluster installation guide redis
1. Download redis: http://redis.io/download

2. Install. After unpacking:
  # cd redis
  # make install test
  # vi redis.conf
  Uncomment:
   cluster-enabled yes
   cluster-config-file nodes-6379.conf
   cluster-node-timeout 5000
   appendonly yes

3. Start:
  # src/redis-server redis.conf

4. Stop:

Stop: # src/redis-cli shutdown
Stop a specific port (when not started on the default port): redis-cli -p 6380 shutdown
With a password: redis-cli -a password shutdown

5. Cluster setup (the important part)
 # (first install ruby and the ruby binding for redis)
 # yum -y install ruby
 # sudo gem install redis (this may be slow)

 (note: you must use IP addresses, not host names)
 (--replicas 0 is the number of replicas; with --replicas 1 you need three more machines as replicas: add three more redis servers, and node roles are assigned in order, masters first, then slaves)
 (error "ERR Slot 16011 is already busy (Redis::CommandError)": leftover configuration from an earlier failed cluster setup. Delete the file named by cluster-config-file in each machine's redis.conf, restart redis-server, and re-run redis-trib.)
 # src/redis-trib.rb create --replicas 0 10.10.8.1:6379 10.10.8.5:6379 10.10.8.8:6379

6. Test
  Connect from the command line:
 # src/redis-cli -c -p 6379
 redis 127.0.0.1:6379> set zaqzaq hello
 redis 127.0.0.1:6379> get zaqzaq

redis 127.0.0.1:6379> info  # server version, memory usage, connections, etc.

redis 127.0.0.1:6379> client list  # list the client connections

redis 127.0.0.1:6379> client kill 127.0.0.1:33441 # kill a client connection

redis 127.0.0.1:6379> dbsize # number of keys currently stored

redis 127.0.0.1:6379> save # save data to disk synchronously

redis 127.0.0.1:6379> bgsave # save data to disk asynchronously

redis 127.0.0.1:6379> flushdb # remove all keys from the current database

redis 127.0.0.1:6379> flushall # remove all keys from all databases

redis 127.0.0.1:6379> lastsave # Unix timestamp of the last successful save to disk

redis 127.0.0.1:6379> monitor # watch incoming requests in real time

redis 127.0.0.1:6379> slowlog len # number of slow-log entries
(integer) 3 

redis 127.0.0.1:6379> slowlog get # return all slow-log entries, up to slowlog-max-len

redis 127.0.0.1:6379> slowlog get 2 # print two slow-log entries

redis 127.0.0.1:6379> slowlog reset # clear the slow log

Note: before building a new redis cluster, clear each server's data with flushdb.

When does the whole cluster become unavailable (cluster_state:fail)?
    a: If any master goes down and it has no slave, the cluster enters the fail state. Equivalently, the cluster fails whenever the slot mapping [0-16383] is incomplete. (redis-3.0.0.rc1 adds the cluster-require-full-coverage parameter; toggling it lets the cluster tolerate partial failure.)
    b: If more than half of the masters go down, the cluster enters the fail state regardless of slaves.
  Note: while the cluster is down, every operation on it fails with (error) CLUSTERDOWN The cluster is down.
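The slot mapping mentioned above is easy to compute yourself: Redis Cluster assigns each key to slot CRC16(key) mod 16384, where CRC16 is the CRC-16/XMODEM variant (polynomial 0x1021, initial value 0). A self-contained Java sketch of the slot function (hash tags like {user} are left out of this sketch):

```java
import java.nio.charset.StandardCharsets;

// Computes the Redis Cluster hash slot of a key: CRC16(key) mod 16384.
public class ClusterSlot {
    // CRC-16/XMODEM: poly 0x1021, init 0, no reflection, no final xor.
    static int crc16(byte[] data) {
        int crc = 0;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF;
            }
        }
        return crc;
    }

    static int slot(String key) {
        return crc16(key.getBytes(StandardCharsets.UTF_8)) % 16384;
    }

    public static void main(String[] args) {
        // CLUSTER KEYSLOT foo reports 12182
        System.out.println(slot("foo"));
    }
}
```

This is also why a missing node takes down the cluster by default: some range of these 16384 slots no longer has an owner, so keys hashing into it cannot be served.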




3. Common commands
    1) Connection
    quit: close the connection
    auth: simple password authentication
    help cmd: show help for cmd, e.g. help quit
    
    2) Persistence
    save: save data to disk synchronously
    bgsave: save data to disk asynchronously
    lastsave: Unix timestamp of the last successful save to disk
    shutdown: save data to disk synchronously, then shut the server down
    
    3) Remote server control
    info: server information and statistics
    monitor: dump received requests in real time
    slaveof: change the replication settings
    config: configure the Redis server at runtime
    
    4) Key commands
    exists(key): test whether a key exists
    del(key): delete a key
    type(key): type of the value stored at key
    keys(pattern): all keys matching the given pattern
    randomkey: return a random key from the key space
    rename(oldname, newname): rename a key
    dbsize: number of keys in the current database
    expire: set a key's time to live, in seconds
    ttl: get a key's time to live
    select(index): switch to the database with the given index
    move(key, dbindex): move a key from the current database to database dbindex
    flushdb: delete all keys in the currently selected database
    flushall: delete all keys in all databases
    
    5) String
    set(key, value): set the string value of key
    get(key): get the string value of key
    getset(key, value): set the value of key and return its previous value
    mget(key1, key2, ..., key N): get the values of multiple strings
    setnx(key, value): set the value of key only if the key does not exist
    setex(key, time, value): set the value of key with an expiry time
    mset(key N, value N): set multiple string values in one call
    msetnx(key N, value N): like mset, but only if none of the keys exist
    incr(key): increment the integer value of key by 1
    incrby(key, integer): increment the integer value of key by integer
    decr(key): decrement the integer value of key by 1
    decrby(key, integer): decrement the integer value of key by integer
    append(key, value): append value to the string stored at key
    substr(key, start, end): get a substring of the string stored at key
    
    6) List 
    rpush(key, value): append value to the list stored at key
    lpush(key, value): prepend value to the list stored at key
    llen(key): length of the list stored at key
    lrange(key, start, end): elements of the list between start and end
    ltrim(key, start, end): trim the list to the given range
    lindex(key, index): element at position index in the list
    lset(key, index, value): set the element at position index in the list
    lrem(key, count, value): remove count occurrences of value from the list
    lpop(key): remove and return the first element of the list
    rpop(key): remove and return the last element of the list
    blpop(key1, key2, ... key N, timeout): blocking version of lpop
    brpop(key1, key2, ... key N, timeout): blocking version of rpop
    rpoplpush(srckey, dstkey): remove the last element of the list at srckey
              and prepend it to the list at dstkey
    
    7) Set
    sadd(key, member): add member to the set stored at key
    srem(key, member): remove member from the set stored at key
    spop(key): remove and return a random member of the set
    smove(srckey, dstkey, member): move member from one set to another
    scard(key): cardinality of the set stored at key
    sismember(key, member): test whether member belongs to the set
    sinter(key1, key2, ... key N): intersection of the given sets
    sinterstore(dstkey, (keys)): intersection of the given sets, stored at dstkey
    sunion(key1, (keys)): union of the given sets
    sunionstore(dstkey, (keys)): union of the given sets, stored at dstkey
    sdiff(key1, (keys)): difference of the given sets
    sdiffstore(dstkey, (keys)): difference of the given sets, stored at dstkey
    smembers(key): all members of the set stored at key
    srandmember(key): a random member of the set stored at key
    
    8) Hash
    hset(key, field, value): set field in the hash stored at key
    hget(key, field): value of field in the hash stored at key
    hmget(key, (fields)): values of the given fields in the hash
    hmset(key, (fields)): set multiple fields in the hash
    hincrby(key, field, integer): increment the integer value of field by integer
    hexists(key, field): test whether field exists in the hash
    hdel(key, field): delete field from the hash
    hlen(key): number of fields in the hash stored at key
    hkeys(key): all field names in the hash
    hvals(key): all values in the hash
    hgetall(key): all fields and values of the hash stored at key
Password-setting example
redis 127.0.0.1:6379> AUTH PASSWORD
(error) ERR Client sent AUTH, but no password is set
redis 127.0.0.1:6379> CONFIG SET requirepass "mypass"
OK
redis 127.0.0.1:6379> AUTH mypass
OK

http://redis.readthedocs.org/en/latest/topic/cluster-spec.html (an excellent post)
http://redis.readthedocs.org/en/latest/
What's new in CentOS 7 centos7
1. Changing the hostname

CentOS 7 changes the hostname very differently from older releases:
hostnamectl set-hostname <the hostname you want>


2. Changing the locale

Use:
localectl set-locale LANG=<the locale you want>

3. Renaming CentOS 7's network interfaces:

Edit the GRUB defaults file:
vim /etc/sysconfig/grub
Append "net.ifnames=0 biosdevname=0" to the kernel command line,
e.g.:

GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/swap vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos/root crashkernel=auto vconsole.keymap=us rhgb quiet net.ifnames=0 biosdevname=0 "
(added at the end)

grub2-mkconfig -o /boot/grub2/grub.cfg

Then rename the matching /etc/sysconfig/network-scripts/ifcfg-xx file,
reboot, and check the new interface names.

4. Passwordless ssh login
a. Generate a key pair on host A.
As root on host A, run ssh-keygen and press Enter at every prompt to generate the keys for the trust relationship.
# ssh-keygen  -t  rsa

b. # scp -r id_rsa.pub <host B's IP>:/root/.ssh/authorized_keys
Host A can now log in to host B without a password.

5. CentOS 7.0 uses firewalld as its default firewall; to switch to iptables, deal with firewalld first:
systemctl start firewalld.service   # start firewalld
systemctl stop firewalld.service    # stop firewalld
systemctl disable firewalld.service # keep firewalld from starting at boot

systemctl status network # check network status





CentOS 7 changes quite a few things around services and networking; many commands we knew by heart no longer apply. Like it or not, we have to accept and get used to the changes.
When the previous article was written there was no minimal-install ISO yet (CentOS-7.0-1406-x86_64-Minimal.iso). After installing it, the first surprise was that ifconfig, netstat, route and arp are all gone. Where did they go?
[root@centos7 ~]# yum search ifconfig
......
======================== Matched: ifconfig =========================
net-tools.x86_64 : Basic networking tools
[root@centos7 ~]# 
So the minimal install no longer includes these old tools. If you really need them, run yum install net-tools; but let's see how to manage the network without them.
We will use the ip and ss commands plus NetworkManager's two front ends, nmtui and nmcli. Frankly these tools are more powerful, but they take real getting used to.

I. ip and ss replace ifconfig, route, arp, netstat

1. ip basics
ip [ OPTIONS ] OBJECT { COMMAND | help }  
OBJECT and COMMAND can each be abbreviated down to a single letter.
ip help          list the OBJECTs and OPTIONS; short form: ip h
ip <OBJECT> help   help for that OBJECT, e.g. ip addr help; short form: ip a h
ip addr          show interface addresses; short form: ip a

Show interface addresses, replacing ifconfig:
[root@centos7 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:15:35:d2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.150.110/24 brd 192.168.150.255 scope global enp0s3
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe15:35d2/64 scope link
       valid_lft forever preferred_lft forever
[root@centos7 ~]# 

Interface statistics:
[root@centos7 ~]# ip -s link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    RX: bytes  packets  errors  dropped overrun mcast   
    0          0        0       0       0       0      
    TX: bytes  packets  errors  dropped carrier collsns
    0          0        0       0       0       0      
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 08:00:27:15:35:d2 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    8135366    131454   0       0       0       456    
    TX: bytes  packets  errors  dropped carrier collsns
    646297     2441     0       0       0       0    

2. ip route: show and set routes
Show the routing table:
[root@centos7 ~]# ip route
default via 192.168.150.254 dev enp0s3  proto static  metric 1024
192.168.150.0/24 dev enp0s3  proto kernel  scope link  src 192.168.150.110 
Hard to read, so format it (the two lines show the default gateway and the LAN route; their fields do not line up):
[root@centos7 tmp]# ip route|column -t
default           via  192.168.150.254  dev    enp0s3  proto  static  metric  1024
192.168.150.0/24  dev  enp0s3           proto  kernel  scope  link    src     192.168.150.110

Add a static route:
[root@centos7 ~]# ip route add 10.15.150.0/24 via 192.168.150.253 dev enp0s3
[root@centos7 ~]#
[root@centos7 ~]# ip route|column -t
default           via  192.168.150.254  dev    enp0s3  proto  static  metric  1024
10.15.150.0/24    via  192.168.150.253  dev    enp0s3  proto  static  metric  1
192.168.150.0/24  dev  enp0s3           proto  kernel  scope  link    src     192.168.150.110
[root@centos7 ~]#
[root@centos7 ~]# ping 10.15.150.1
PING 10.15.150.1 (10.15.150.1) 56(84) bytes of data.
64 bytes from 10.15.150.1: icmp_seq=1 ttl=63 time=1.77 ms
64 bytes from 10.15.150.1: icmp_seq=1 ttl=63 time=1.08 ms
64 bytes from 10.15.150.1: icmp_seq=1 ttl=63 time=1.57 ms
^C

To delete a static route, replace add with del, or, even simpler, give only the destination network:
[root@centos7 ~]# ip route del 10.15.150.0/24
[root@centos7 ~]# 
 
But route changes made with the ip route command are not saved; they are gone after a reboot.
The RHEL documentation describes several ways to make a static route permanent; only one of them worked in my tests:
[root@centos7 ~]#echo "10.15.150.0/24 via 192.168.150.253 dev enp0s3" > /etc/sysconfig/network-scripts/route-enp0s3
It takes effect after a reboot, or after disabling and re-enabling the enp0s3 device.
Note: the /etc/sysconfig/static-routes and /etc/sysconfig/network config files did not work for this.

3. ip neighbor replaces arp -n
[root@centos7 ~]# ip nei
192.168.150.254 dev enp0s3 lladdr b8:a3:86:37:bd:f8 STALE
192.168.150.100 dev enp0s3 lladdr 90:b1:1c:94:a1:20 DELAY
192.168.150.253 dev enp0s3 lladdr 00:09:0f:85:86:b9 STALE

4. ss replaces netstat
The equivalent of netstat -ant:
[root@centos7 ~]# ss -ant
State       Recv-Q Send-Q   Local Address:Port     Peer Address:Port
LISTEN      0      100          127.0.0.1:25                  *:*     
LISTEN      0      128                  *:22                  *:*     
ESTAB       0      0      192.168.150.110:22    192.168.150.100:53233
LISTEN      0      100                ::1:25                 :::*     
LISTEN      0      128                 :::22                 :::*   
 
The equivalent of netstat -antp:
[root@centos7 tmp]# ss -antp
State      Recv-Q Send-Q        Local Address:Port          Peer Address:Port
LISTEN     0      100               127.0.0.1:25                       *:*      
users:(("master",1817,13))
LISTEN     0      128                       *:22                       *:*      
users:(("sshd",1288,3))
ESTAB      0      0           192.168.150.110:22         192.168.150.100:59413  
users:(("sshd",2299,3))
LISTEN     0      100                     ::1:25                      :::*      
users:(("master",1817,14))
LISTEN     0      128                      :::22                      :::*      
users:(("sshd",1288,4))
[root@centos7 tmp]#
It really reads awkwardly: however wide the terminal, the users: part wraps to the next line, even though it logically belongs on the same line.

Formatting tidies the columns, but then the header row is misaligned:
[root@centos7 tmp]# ss -antp|column -t
State   Recv-Q  Send-Q  Local               Address:Port           Peer                        Address:Port
LISTEN  0       100     127.0.0.1:25        *:*                    users:(("master",1817,13))
LISTEN  0       128     *:22                *:*                    users:(("sshd",1288,3))
ESTAB   0       0       192.168.150.110:22  192.168.150.100:59413  users:(("sshd",2299,3))
LISTEN  0       100     ::1:25              :::*                   users:(("master",1817,14))
LISTEN  0       128     :::22               :::*                   users:(("sshd",1288,4))


5. The old network script and ifcfg files
From CentOS 7 on, networking is managed by the NetworkManager service. Compared with the old /etc/init.d/network script, NetworkManager is a dynamic, event-driven network management service. The old /etc/init.d/network script and ifup, ifdown, etc. still exist, but in a standby role: while NetworkManager is running, these scripts mostly hand the configuration work over to it; when NetworkManager is not running, they manage the network the traditional way.
[root@centos7 ~]# /etc/init.d/network start
Starting network (via systemctl):                          [  OK  ]
Note the "(via systemctl)".


6. Network configuration files:
/etc/sysconfig/network   supposedly global settings; empty by default
/etc/hostname            the hostname set with nmtui is stored here
/etc/resolv.conf         DNS settings; no need to edit by hand, DNS servers configured in nmtui end up here
/etc/sysconfig/network-scripts/            connection settings (ifcfg files)
/etc/NetworkManager/system-connections/    VPN, mobile broadband and PPPoE connections
zookeeper sample code
package com.zaq.zk;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

public class Test {
	public static void main(String[] args) throws Exception {
		// create a connection to the server
		 ZooKeeper zk = new ZooKeeper("zaqzaq91:2181,zaqzaq92:2181,zaqzaq69:2181" , 
		        20000, new Watcher() { 
		            // watch all triggered events
		            public void process(WatchedEvent event) { 
		                System.out.println(event.getPath() + " triggered event: " + event.getType()); 
		            } 
		        }); 
		 // create a znode
		 zk.create("/testRootPath", "testRootData".getBytes(), Ids.OPEN_ACL_UNSAFE,
		   CreateMode.PERSISTENT); 
		 // create a child znode
		 zk.create("/testRootPath/testChildPathOne", "testChildDataOne".getBytes(),
		   Ids.OPEN_ACL_UNSAFE,CreateMode.PERSISTENT); 
		 System.out.println(new String(zk.getData("/testRootPath",false,null))); 
		 // list the child znodes
		 System.out.println(zk.getChildren("/testRootPath",true)); 
		 // update the first child znode's data
		 zk.setData("/testRootPath/testChildPathOne","modifyChildDataOne".getBytes(),-1); 
		 System.out.println("znode status: ["+zk.exists("/testRootPath",true)+"]"); 
		 System.out.println("znode status: ["+zk.exists("/testRootPathNN",true)+"]"); 
		 // create a second child znode
		 zk.create("/testRootPath/testChildPathTwo", "testChildDataTwo".getBytes(), 
		   Ids.OPEN_ACL_UNSAFE,CreateMode.PERSISTENT); 
		 System.out.println(new String(zk.getData("/testRootPath/testChildPathTwo",true,null))); 

		 Thread.sleep(1000*4);

		 // delete the child znodes
		 zk.delete("/testRootPath/testChildPathTwo",-1); 
		 zk.delete("/testRootPath/testChildPathOne",-1); 
		 // delete the parent znode
		 zk.delete("/testRootPath",-1); 
		 // close the connection
		 zk.close();
	}
}


package com.zaq.zk;

import java.util.Collections;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.ZooDefs.Perms;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Id;
import org.apache.zookeeper.server.auth.DigestAuthenticationProvider;

public class Test2 {
	public static void main(String[] args) throws Exception {
		// 创建一个与服务器的连接
		 ZooKeeper zk = new ZooKeeper("zaqzaq1:2181,zaqzaq2:2181,zaqzaq5:2181" , 
		        20000, new Watcher() { 
		            // 监控所有被触发的事件
		            public void process(WatchedEvent event) { 
		                System.out.println(event.getPath()+"已经触发了" + event.getType() + "事件!"); 
		            } 
		        }); 
		 zk.addAuthInfo("digest", "zaqzaq1:zaqzaq1".getBytes()); 
		 // read the /pass node's data (requires the digest auth added above)
		 System.out.println(new String(zk.getData("/pass",false,null))); 
		 zk.setACL("/dubbo", Collections.singletonList(new ACL(Perms.ALL,new Id("digest", DigestAuthenticationProvider.generateDigest("zaqzaq:zaqzaq")))), -1);
	} 
}
Zookeeper 安装和配置 zookeeper Zookeeper 安装和配置
Zookeeper的安装和配置十分简单, 既可以配置成单机模式, 也可以配置成集群模式. 下面将分别进行介绍.

单机模式
点击这里下载zookeeper的安装包之后, 解压到合适目录. 进入zookeeper目录下的conf子目录, 创建zoo.cfg:

tickTime=2000    
dataDir=/Users/apple/zookeeper/data    
dataLogDir=/Users/apple/zookeeper/logs    
clientPort=4180   
参数说明:

tickTime: zookeeper中使用的基本时间单位, 毫秒值.
dataDir: 数据目录. 可以是任意目录.
dataLogDir: log目录, 同样可以是任意目录. 如果没有设置该参数, 将使用和dataDir相同的设置.
clientPort: 监听client连接的端口号.
至此, zookeeper的单机模式已经配置好了. 启动server只需运行脚本:

bin/zkServer.sh start  
 Server启动之后, 就可以启动client连接server了, 执行脚本:
bin/zkCli.sh -server localhost:4180  
 
伪集群模式
所谓伪集群, 是指在单台机器中启动多个zookeeper进程, 并组成一个集群. 以启动3个zookeeper进程为例.

将zookeeper的目录拷贝2份:

|--zookeeper0  
|--zookeeper1  
|--zookeeper2  
 更改zookeeper0/conf/zoo.cfg文件为:

tickTime=2000    
initLimit=5    
syncLimit=2    
dataDir=/Users/apple/zookeeper0/data    
dataLogDir=/Users/apple/zookeeper0/logs    
clientPort=4180  
server.0=127.0.0.1:8880:7770    
server.1=127.0.0.1:8881:7771    
server.2=127.0.0.1:8882:7772  
新增了几个参数, 其含义如下:

initLimit: zookeeper集群中的包含多台server, 其中一台为leader, 集群中其余的server为follower. initLimit参数配置初始化连接时, follower和leader之间的最长心跳时间. 此时该参数设置为5, 说明时间限制为5倍tickTime, 即5*2000=10000ms=10s.
syncLimit: 该参数配置leader和follower之间发送消息, 请求和应答的最大时间长度. 此时该参数设置为2, 说明时间限制为2倍tickTime, 即4000ms.
server.X=A:B:C 其中X是一个数字, 表示这是第几号server. A是该server所在的IP地址. B配置该server和集群中的leader交换消息所使用的端口. C配置选举leader时所使用的端口. 由于配置的是伪集群模式, 所以各个server的B, C参数必须不同.
参照zookeeper0/conf/zoo.cfg, 配置zookeeper1/conf/zoo.cfg, 和zookeeper2/conf/zoo.cfg文件. 只需更改dataDir, dataLogDir, clientPort参数即可.

在之前设置的dataDir中新建myid文件, 写入一个数字, 该数字表示这是第几号server. 该数字必须和zoo.cfg文件中的server.X中的X一一对应.
/Users/apple/zookeeper0/data/myid文件中写入0, /Users/apple/zookeeper1/data/myid文件中写入1, /Users/apple/zookeeper2/data/myid文件中写入2.
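The myid setup above can be scripted; a minimal sketch (the /Users/apple layout is the example's, so BASE here defaults to a local demo directory and is adjustable):

```shell
# Create the data directory and myid file for each pseudo-cluster server.
# Point BASE at /Users/apple to match the text.
BASE="${BASE:-zookeeper-demo}"
for i in 0 1 2; do
  mkdir -p "$BASE/zookeeper$i/data"
  echo "$i" > "$BASE/zookeeper$i/data/myid"
done
cat "$BASE/zookeeper1/data/myid"   # prints 1
```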

分别进入/Users/apple/zookeeper0/bin, /Users/apple/zookeeper1/bin, /Users/apple/zookeeper2/bin三个目录, 启动server.
任意选择一个server目录, 启动客户端:

bin/zkCli.sh -server localhost:4180  
 
集群模式
集群模式的配置和伪集群基本一致.
由于集群模式下, 各server部署在不同的机器上, 因此各server的conf/zoo.cfg文件可以完全一样.
下面是一个示例:

tickTime=2000    
initLimit=5    
syncLimit=2    
dataDir=/home/zookeeper/data    
dataLogDir=/home/zookeeper/logs    
clientPort=4180  
server.1=zaqzaq1:2888:3888  
server.2=zaqzaq2:2888:3888    
server.3=zaqzaq3:2888:3888  
The example deploys three zookeeper servers, on the hosts zaqzaq1, zaqzaq2 and zaqzaq3. Note that the number in the myid file under each server's dataDir must be different:

zaqzaq1's myid is 1, zaqzaq2's myid is 2, and zaqzaq3's myid is 3.

To run a server as an observer, add the following to that server's configuration file:

peerType=observer

and in every server's configuration file append :observer to the line of each server configured as an observer, for example:

server.1=zaqzaq1:2888:3888:observer


ZooKeeper super-administrator (superDigest) configuration:
Edit ZooKeeper's startup script zkServer.sh and add the following JVM option near the start case (around line 100):
-Dzookeeper.DigestAuthenticationProvider.superDigest=zaqzaq:T4iBhLeZHWvpN1PYVzIF3yV9sO0=
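For reference, the superDigest value is just base64(SHA-1("user:password")) prefixed with the user name, which is exactly what DigestAuthenticationProvider.generateDigest() returns. A pure-JDK sketch of the same computation (the class name SuperDigest is made up for illustration):

```java
import java.security.MessageDigest;
import java.util.Base64;

public class SuperDigest {
    // Same computation as DigestAuthenticationProvider.generateDigest("user:pass"):
    // "user:" + base64(SHA-1 of the whole "user:pass" string).
    static String generateDigest(String idPassword) throws Exception {
        String[] parts = idPassword.split(":", 2);
        byte[] sha1 = MessageDigest.getInstance("SHA-1")
                .digest(idPassword.getBytes("UTF-8"));
        return parts[0] + ":" + Base64.getEncoder().encodeToString(sha1);
    }

    public static void main(String[] args) throws Exception {
        // Paste the output into -Dzookeeper.DigestAuthenticationProvider.superDigest=...
        System.out.println(generateDigest("zaqzaq:zaqzaq"));
    }
}
```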
mysql-cluster 7.3.5-linux 安装 mysql, linux, cluster mysql-cluster 7.3.5-linux 安装
先安装 
#yum install -y perl
#yum install -y perl-Module-Install.noarch  
#yum install libaio
【集群环境】

管理节点    10.0.0.19
数据节点    10.0.0.12
                   10.0.0.17
sql节点       10.0.0.18
                   10.0.0.22

1. 添加mysql用户
# groupadd mysql  
# useradd mysql -g mysql  

2. 安装mysql-cluster 7.3.5-linux 
# cd /usr/local/src/(已下载好集群版)  
# tar -xvf mysql-cluster-gpl-7.3.5-linux-glibc2.5-x86_64.tar.gz   
# mv mysql-cluster-gpl-7.3.5-linux-glibc2.5-x86_64 ../mysql  
# cd ..  
# chown -R mysql:mysql mysql/  
# cd mysql  
# scripts/mysql_install_db --user=mysql   

以上步骤5台机器都要执行
3. 集群配置
(1) 管理节点

# vi  /var/lib/mysql-cluster/config.ini (目录和文件没有请新建,添加以下内容)  

[NDBD DEFAULT]  
NoOfReplicas=2  
[TCP DEFAULT]  
portnumber=3306  
  
[NDB_MGMD]  
#设置管理节点服务器  
nodeid=1  
HostName=10.0.0.19  
DataDir=/var/mysql/data  
  
[NDBD]  
id=2  
HostName=10.0.0.12  
DataDir=/var/mysql/data  
  
[NDBD]  
id=3  
HostName=10.0.0.17  
DataDir=/var/mysql/data  
  
[MYSQLD]  
id=4  
HostName=10.0.0.18  
[MYSQLD]  
id=5  
HostName=10.0.0.22  
  
#必须有空的mysqld节点,不然数据节点断开后启动有报错  
[MYSQLD]  
id=6  
[MYSQLD]  
id=7  

拷贝ndb_mgm、ndb_mgmd到bin目录。
# cd /usr/local/mysql/bin  
# cp ./ndb_mgm /usr/local/bin/  
# cp ./ndb_mgmd /usr/local/bin/  

备注:    管理节点只要ndb_mgm和ndb_mgmd两个文件和一个配置文件即可。
                因此把这三个文件复制到那里,那里就是管理节点了。
                ndb_mgmd是mysql cluster管理服务器,ndb_mgm是客户端管理工具。

启动管理节点
# ndb_mgmd -f /var/lib/mysql-cluster/config.ini  
关闭管理节点
# ndb_mgm>shutdown  
备注:命令行中的ndb_mgmd是mysql cluster的管理服务器,后面的-f表示后面的参数是启动的参数配置文件。
如果在启动后过了几天又添加了一个数据节点,这时修改了配置文件启动时就必须加上--initial参数,不然添加的节点不会作用在mysql cluster中。
# ndb_mgmd -f /var/lib/mysql-cluster/config.ini --initial  

# ndb_mgm  
这时就进入到客户端,可以对mysql cluster进行各项操作。

# ndb_mgm> show 查看各节点情况。  
  
# ndb_mgm> all report memory 查看各数据节点使用情况  
  
# ndb_mgm>create nodegroup 3;创建数据节点分组  
  
# mysql> alter online table data_house reorganize partition; 调整分区数据  

(2) 数据节点

# vi /etc/my.cnf (添加以下内容)  

[mysqld]  
datadir=/var/mysql/data  
socket=/var/mysql/mysql.sock  
user=mysql  
# Disabling symbolic-links is recommended to prevent assorted security risks  
symbolic-links=0  
  
#运行NDB存储引擎  
ndbcluster   
#指定管理节点  
ndb-connectstring=10.0.0.19  
  
[MYSQL_CLUSTER]  
ndb-connectstring=10.0.0.19  
[NDB_MGM]  
connect-string=10.0.0.19  
  
[mysqld_safe]  
log-error=/var/mysql/log/mysqld.log  
pid-file=/var/run/mysqld/mysqld.pid  

安装后第一次启动数据节点时要加上--initial参数,其它时候不要加,除非是在备份、恢复或配置变化后重启时
# /usr/local/mysql/bin/ndbd --initial   
  
正常启动  
# /usr/local/mysql/bin/ndbd  
(3) sql节点
# cd /usr/local/mysql/  
设置mysql服务为开机自启动
# cp support-files/mysql.server /etc/rc.d/init.d/mysqld  
# chmod +x /etc/rc.d/init.d/mysqld  
# chkconfig --add mysqld  
# vi /etc/my.cnf (添加以下内容)  
[mysqld]  
server-id=4  # must be different on each SQL node  
datadir=/var/mysql/data  
socket=/var/mysql/mysql.sock  
user=mysql  
# Disabling symbolic-links is recommended to prevent assorted security risks  
symbolic-links=0  
#log-bin = /var/mysql/log/mysql-bin.log  
max_connections=1000  
  
#以下为mysql 主主模式的配置文件  
# 忽略mysql数据库复制  
binlog-ignore-db=mysql  
# step auto-increment values by 2  
auto-increment-increment = 2  
# auto-increment offset; 1 means this node generates 1, 3, 5, ...  
auto-increment-offset = 1  
  
[mysqld_safe]  
#log-error=/var/mysql/log/mysqld.log  
#pid-file=/var/run/mysqld/mysqld.pid  
  
[MYSQLD]  
ndbcluster  
ndb-connectstring=10.0.0.19  
[MYSQL_CLUSTER]  
ndb-connectstring=10.0.0.19  
[NDB_MGM]  
connect-string=10.0.0.19  
#service mysqld start  

Fix for the bin/mysql login error 1. can't connect to local MySQL server through socket '/tmp/mysql.sock':
                               ln -s /var/mysql/mysql.sock /tmp/mysql.sock

MySQL Cluster start order: management node -> data nodes -> SQL nodes.
Shutdown order is the reverse: SQL nodes -> data nodes -> management node.


NoOfReplicas简单解释
节点a,b
数据1,2

假如NoOfReplicas=1

则1存在a上,2存在b上
只要a或者b down掉,数据就不完整了,少了我们的数据1或者2

假如NoOfReplicas=2
则1存在a上,2的一个备份也存在a上
2存在b上,1的一个备份也存在b上
任何一个a或者b down掉,数据都是完整的

NoOfReplicas=2 为2个ndb节点为一个组
create nodegroup 6,7 将id为6和7的ndb作为新的一个组,当组内一台机器宕机,不会影响整个集群
Warning: after changing NoOfReplicas, the ndb nodes must be restarted with --initial, which wipes all of their data.

Make sure the firewall is disabled; it can come back on after a reboot and cause baffling, hard-to-trace errors.

Character-set settings:
character-set-server=utf8      # character set the server uses
character-set-client=utf8      # character set the client sends queries in
character-set-connection=utf8  # character set incoming queries are converted to
character-set-results=utf8     # character set result sets are converted to before being sent to the client

lower_case_table_names=1  # treat table names case-insensitively
SQL nodes:
To make NDB the default engine, add default-storage-engine=ndbcluster to /etc/my.cnf under [mysqld].
Tomcat Ajax 跨域实现 可以设置安全策略 指定站点跨域 tomcat Tomcat Ajax 跨域实现 可以设置安全策略 指定站点跨域
下载cors-filter-1.7.jar,java-property-utils-1.9.jar这两个库文件,放到lib目录下。(可在http://search.maven.org上查询并下载。)工程项目中web.xml中的配置如下:  

Note: in the cors.allowOrigin parameter below, instead of * you can specify an explicit origin, e.g. http://192.168.1.102.

<!-- 实现跨域 -->
    <filter>
        <filter-name>CORS</filter-name>
        <filter-class>com.thetransactioncompany.cors.CORSFilter</filter-class>
        <init-param>
            <param-name>cors.allowOrigin</param-name>
            <param-value>*</param-value>
        </init-param>
        <init-param>
            <param-name>cors.supportedMethods</param-name>
            <param-value>GET, POST, HEAD, PUT, DELETE</param-value>
        </init-param>
        <init-param>
            <param-name>cors.supportedHeaders</param-name>
            <param-value>Accept, Origin, X-Requested-With, Content-Type, Last-Modified</param-value>
        </init-param>
        <init-param>
            <param-name>cors.exposedHeaders</param-name>
            <param-value>Set-Cookie</param-value>
        </init-param>
        <init-param>
            <param-name>cors.supportsCredentials</param-name>
            <param-value>true</param-value>
        </init-param>
    </filter>
    <filter-mapping>
        <filter-name>CORS</filter-name>
        <url-pattern>/*</url-pattern>
    </filter-mapping>
mysql命令 mysql http://www.cnblogs.com/zhangzhu/archive/2013/07/04/3172486.html
1、连接Mysql
格式: mysql -h主机地址 -u用户名 -p用户密码

1、连接到本机上的MYSQL。
首先打开DOS窗口,然后进入目录mysql\bin,再键入命令mysql -u root -p,回车后提示你输密码.注意用户名前可以有空格也可以没有空格,但是密码前必须没有空格,否则让你重新输入密码。

如果刚安装好MYSQL,超级用户root是没有密码的,故直接回车即可进入到MYSQL中了,MYSQL的提示符是: mysql>

2. Connect to MySQL on a remote host. Suppose the remote host's IP is 110.110.110.110, the user name is root and the password is abcd123. Type:
    mysql -h110.110.110.110 -uroot -pabcd123   (note: there must be no space between -p and the password; the space between -u and the user name is optional)

3、退出MYSQL命令: exit (回车)
 
2、修改密码
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('newpass');
格式:mysqladmin -u用户名 -p旧密码 password 新密码

1. Give root the password ab12. In a DOS window, enter the mysql\bin directory and type:
    mysqladmin -u root password ab12
注:因为开始时root没有密码,所以-p旧密码一项就可以省略了。

2. Then change root's password to djg345:
    mysqladmin -u root -pab12 password djg345
3、增加新用户
注意:和上面不同,下面的因为是MYSQL环境中的命令,所以后面都带一个分号作为命令结束符

格式:grant select on db.* to zaqzaq@10.10.15.109 identified by 'zaqzaq';

grant all on *.* to zaqzaq@'%' identified by 'zaqzaq';
1. Add a user test1 with password abc, allowed to log in from any host, with select, insert, update and delete privileges on all databases. First connect to MySQL as root, then type:
    grant select,insert,update,delete on *.* to 'test1'@'%' identified by 'abc';

但增加的用户是十分危险的,你想如某个人知道test1的密码,那么他就可以在internet上的任何一台电脑上登录你的mysql数据库并对你的数据可以为所欲为了,解决办法见2。

2. Add a user test2 with password abc who may only log in from localhost and may select, insert, update and delete on the database mydb (localhost means the host the MySQL server itself runs on). Then even someone who knows test2's password cannot reach the database from the internet directly, only through a web page on the MySQL host.
    grant select,insert,update,delete on mydb.* to 'test2'@'localhost' identified by 'abc';

If you don't want test2 to have a password, run the command again with an empty password:
    grant select,insert,update,delete on mydb.* to 'test2'@'localhost' identified by '';
 
4.1 创建数据库
注意:创建数据库之前要先连接Mysql服务器

命令:create database <数据库名>

例1:建立一个名为xhkdb的数据库
   mysql> create database xhkdb;

例2:创建数据库并分配用户

①CREATE DATABASE 数据库名;

②GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP,ALTER ON 数据库名.* TO '用户名'@localhost IDENTIFIED BY '密码';

③SET PASSWORD FOR '用户名'@'localhost' = OLD_PASSWORD('密码');

Run the three commands in order to finish creating the database. Note: 密码 (password), 数据库名 (database name) and 用户名 (user name) are placeholders to replace with your own values.
4.2 显示数据库
命令:show databases (注意:最后有个s)
mysql> show databases;

注意:为了不再显示的时候乱码,要修改数据库默认编码。以下以GBK编码页面为例进行说明:

1、修改MYSQL的配置文件:my.ini里面修改default-character-set=gbk
2、代码运行时修改:
   ①Java代码:jdbc:mysql://localhost:3306/test?useUnicode=true&characterEncoding=gbk
   ②PHP代码:header("Content-Type:text/html;charset=gb2312");
   ③C语言代码:int mysql_set_character_set( MYSQL * mysql, char * csname);
This function sets the default character set for the current connection. The string csname names a valid character set; the connection collation becomes that character set's default collation. It works like the SET NAMES statement, but it also updates mysql->charset, which affects the character set used by mysql_real_escape_string().
4.3 删除数据库
命令:drop database <数据库名>
例如:删除名为 xhkdb的数据库
mysql> drop database xhkdb;

例子1:删除一个已经确定存在的数据库
   mysql> drop database drop_database;
   Query OK, 0 rows affected (0.00 sec)

例子2:删除一个不确定存在的数据库
   mysql> drop database drop_database;
   ERROR 1008 (HY000): Can't drop database 'drop_database'; database doesn't exist
      //发生错误,不能删除'drop_database'数据库,该数据库不存在。
   mysql> drop database if exists drop_database;
   Query OK, 0 rows affected, 1 warning (0.00 sec)//产生一个警告说明此数据库不存在
   mysql> create database drop_database;
   Query OK, 1 row affected (0.00 sec)
   mysql> drop database if exists drop_database;//if exists 判断数据库是否存在,不存在也不产生错误
   Query OK, 0 rows affected (0.00 sec)
ERROR 2002 (HY000) mysql ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock'
错误提示:

[root@localhost ~]# mysql --socket=/tmp/mysql.sock
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)

解决方法:

MySQL's socket is actually at /var/lib/mysql/mysql.sock, but the mysql client looks for /tmp/mysql.sock by default, hence the error.
[root@localhost ~]# find / -name mysql.sock
/var/lib/mysql/mysql.sock
 
1.直接指定mysql通道
 
[root@localhost ~]# mysql --socket=/var/lib/mysql/mysql.sock
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2 to server version: 5.0.22
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql>
 
2. 创建符号连接:
 
为mysql.sock增加软连接(相当于windows中的快捷方式)。
ln -s /var/lib/mysql/mysql.sock /tmp/mysql.sock

eg:
[root@localhost ~]# mysql
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)
[root@localhost ~]# ln -s /var/lib/mysql/mysql.sock /tmp/mysql.sock
[root@localhost ~]# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3 to server version: 5.0.22
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql>
tomcat session 复制 tomcat session
用tomcat做负载集群时, 经常会用到session复制(Session Replication), 很多例子会告诉我们要配置apache或者其他的Web Server. 而事实上, 单纯从session复制的角度讲, 是不需要Web Server的.
 
Tomcat supports two kinds of session replication. One is global (all-to-all): when a session changes on one node (tomcat instance), the change is replicated to all other members of the cluster group. The other is partial: it uses the BackupManager, which replicates only to a single backup node that has the same web application deployed; per the official documentation, this mode has not been tested as extensively.
 
tomcat的session复制是基于IP组播(multicast)来完成的, 详细的IP组播介绍可以参考这里.
简单的说就是需要进行集群的tomcat通过配置统一的组播IP和端口来确定一个集群组, 当一个node的session发生变更的时候, 它会向IP组播发送变更的数据, IP组播会将数据分发给所有组里的其他成员(node).
 http://tomcat.apache.org/tomcat-8.0-doc/cluster-howto.html
The tomcat7 configuration is as follows (here every tomcat runs on a different physical host; if several instances share one host, each needs a different Receiver port):
  <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"  
                 channelSendOptions="8">  
  
          <Manager className="org.apache.catalina.ha.session.DeltaManager"  
                   expireSessionsOnShutdown="false"  
                   notifyListenersOnReplication="true"/>  
  
          <Channel className="org.apache.catalina.tribes.group.GroupChannel">  
            <Membership className="org.apache.catalina.tribes.membership.McastService"  
                        address="228.0.0.4"  
                        port="45564"  
                        frequency="500"  
                        dropTime="3000"/>  
            <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"  
                      address="auto"  
                      port="4000"  
                      autoBind="100"  
                      selectorTimeout="5000"  
                      maxThreads="6"/>  
  
            <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">  
              <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>  
            </Sender>  
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>  
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>  
          </Channel>  
  
          <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"  
                 filter=""/>  
          <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>  
  
          <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"  
                    tempDir="/tmp/war-temp/"  
                    deployDir="/tmp/war-deploy/"  
                    watchDir="/tmp/war-listen/"  
                    watchEnabled="false"/>  
  
          <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>  
          <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>  
        </Cluster>  
 
然后新建一个web应用, 我们这里叫TomcatClusterDemo, web context与名称一致.
Create a JSP that is responsible for creating/updating attributes in the session and that displays all session attributes on the page.
To verify that the sessions really are synchronized, it also prints the session ID. The code is as follows:
 
 
  <%@ page contentType="text/html; charset=UTF-8" %>  
<%@ page import="java.util.*" %>  
<html><head><title>Tomcat Cluster Demo</title></head>  
<body>  
Server Info:  
<%  
out.println(request.getLocalAddr() + " : " + request.getLocalPort()+"<br>");%>  
<%  
  out.println("<br> ID " + session.getId()+"<br>");  
    
  String dataName = request.getParameter("dataName");  
  if (dataName != null && dataName.length() > 0) {  
     String dataValue = request.getParameter("dataValue");  
     session.setAttribute(dataName, dataValue);  
     System.out.println("application:" + application.getAttribute(dataName));  
     application.setAttribute(dataName, dataValue);  
  }  
  out.print("<b>Session List</b>");  
  Enumeration<String> e = session.getAttributeNames();  
  while (e.hasMoreElements()) {  
     String name = e.nextElement();  
     String value = session.getAttribute(name).toString();  
     out.println( name + " = " + value+"<br>");  
         System.out.println( name + " = " + value);  
   }  
%>  
  <form action="test.jsp" method="POST">  
    Name:<input type=text size=20 name="dataName">  
     <br>  
    Value:<input type=text size=20 name="dataValue">  
     <br>  
    <input type=submit>  
   </form>  
</body>  
</html>  
 
同时, 在web.xml里增加<distributable/>描述
 
 
<?xml version="1.0" encoding="UTF-8"?>  
<web-app id="WebApp_ID" version="2.5" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">  
    <display-name>TomcatClusterDemo</display-name>  
    <distributable/>  
    <welcome-file-list>  
        <welcome-file>index.html</welcome-file>  
        <welcome-file>index.htm</welcome-file>  
        <welcome-file>index.jsp</welcome-file>  
        <welcome-file>default.html</welcome-file>  
        <welcome-file>default.htm</welcome-file>  
        <welcome-file>test.jsp</welcome-file>  
    </welcome-file-list>  
</web-app>  
 
现在将TomcatClusterDemo部署到两个tomcat上(直接将war包或者部署文件拷贝到webapps下, 或者通过Tomcat Web Application Manager部署), 依次启动两个tomcat.
 
First visit test.jsp on the first tomcat (9.119.84.68 below) and add a few session attributes.
Then visit test.jsp on the second tomcat (9.119.84.88 below). To tell tomcat which session to use, append ;jsessionid=SESSION_ID to the URL;
SESSION_ID can be read off the first test.jsp page.
 
看看效果, 两个在不同server上的tomcat实现了session复制!

Notes:
1. Add the <distributable/> element to web.xml.
2. If you use spring-security, have users log in via the remember-me mechanism so that sessions stay shared after a failover.
keepalived安装配置 http http://www.tuicool.com/articles/2qAjUf
cd /usr/local

1、下载安装包并解压

 sudo wget http://www.keepalived.org/software/keepalived-1.2.13.tar.gz 

tar zxvf keepalived-1.2.13.tar.gz 

2、编译安装

cd keepalived-1.2.13

./configure --prefix=/usr/local/keepalived

[如果出现configure: error:

     !!! OpenSSL is not properly installed on your system. !!!

则需要先安装openssl和openssl-devel, yum install openssl openssl-devel]

make 

sudo make install

sudo cp /usr/local/keepalived/sbin/keepalived /usr/sbin/

sudo cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/

sudo cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/

mkdir /etc/keepalived

cd /etc/keepalived

sudo cp /usr/local/keepalived/etc/keepalived/keepalived.conf ./

3、将keepalived添加到开机启动服务中,并进行测试

chkconfig keepalived on

chkconfig --list | grep keepalived

sudo service keepalived restart

Edit keepalived.conf (change smtp_server to localhost, router_id to NodeMaster, and virtual_ipaddress to an unused address on your own subnet, in a form like 10.1.xx.xx/24).

运行ip addr查看vip

运行ping命令访问vip。

4、在从服务器上进行步骤1-3

注意:router_id变成NodeBackup,priority变成99, state变成BACKUP,主从服务器要在同一个网段内。








1.安装LVS前系统需要安装popt-static,kernel-devel,make,gcc,openssl-devel,lftp,libnl*,popt*

 #yum -y install popt-static kernel-devel make gcc openssl-devel lftp libnl* popt*
 #ln -s /usr/src/kernels/2.6.32-431.17.1.el6.x86_64/ /usr/src/linux
 #tar -zxvf ipvsadm-1.26.tar.gz
 #cd ipvsadm-1.26
 #make && make install
2.安装keepalived

 #wget http://www.keepalived.org/software/keepalived-1.2.13.tar.gz
 #tar -zxvf keepalived-1.2.13.tar.gz
 #cd keepalived-1.2.13
 #./configure
 #make && make install
 ######### 将keepalived做成启动服务,方便管理##########
 # cp /usr/local/etc/rc.d/init.d/keepalived /etc/init.d/
 # cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/
 # mkdir /etc/keepalived/
 # cp /usr/local/etc/keepalived/keepalived.conf /etc/keepalived/
 # cp /usr/local/sbin/keepalived /usr/sbin/
 # service keepalived start | stop
3. Enable IP forwarding

 #vi /etc/sysctl.conf    (set net.ipv4.ip_forward = 1)
 #sysctl -p
4. Configure keepalived

 #vi /etc/keepalived/keepalived.conf
The keepalived.conf looks like this:

 ! Configuration File for keepalived
 
 global_defs {
    notification_email {
       382566697@qq.com
    }
    smtp_connect_timeout 30
    router_id LVS_MASTER #备份服务器上将MASTER改为BACKUP 
 }
 vrrp_sync_group KEEPALIVED_LVS {  
    group {  
        KEEPALIVED_LVS_MASTER
    }  
}  
   
 vrrp_instance  KEEPALIVED_LVS_MASTER  {
     state MASTER #备份服务器上将MASTER改为BACKUP 
     interface eth0  #该网卡名字需要查看具体服务器的网口
     lvs_sync_daemon_interface eth2
     virtual_router_id 51
     priority 100 # 备份服务上将100改为90
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass 1111
     }
     virtual_ipaddress {
         192.168.0.209
          #(如果有多个VIP,继续换行填写.)
     }
 }
 
 virtual_server 192.168.0.209 80 {
     delay_loop 6   #(每隔6秒查询realserver状态)
     lb_algo rr   #(rr 算法)
     lb_kind DR      #(Direct Route)
     nat_mask 255.255.255.0
     persistence_timeout 0   #会话保存时长(秒),0表示不使用stickyness会话    
     protocol TCP    #(用TCP协议检查realserver状态)
 
     real_server 192.168.0.212 80 {
         weight 1   #(权重)
         TCP_CHECK {
             connect_timeout 10    #(10秒无响应超时)
             nb_get_retry 3
             delay_before_retry 3
             connect_port 80
         }
      }
      real_server 192.168.0.227 80 {
         weight 1
         TCP_CHECK {
             connect_timeout 10 #连接超时时间  
             nb_get_retry 3    #重试次数  
             delay_before_retry 3 #每次重试前等待延迟时间 
             connect_port 80
         }
      }
     
 }
Note that there must be a space before each {; in my setup TCP_CHECK had no space before its brace, and keepalived could not find the real_server entries. eth2 is the machine's NIC name.

RS真实服务器的脚本
#!/bin/bash
SNS_VIP=172.16.62.100

. /etc/rc.d/init.d/functions

case "$1" in

start)
ifconfig lo:0 $SNS_VIP netmask 255.255.255.255 broadcast $SNS_VIP #netmask 255.255.255.255注意
/sbin/route add -host $SNS_VIP dev lo:0
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
sysctl -p > /dev/null 2>&1
echo "RealServer started!"

;;

stop)
ifconfig lo:0 down
/sbin/route del $SNS_VIP > /dev/null 2>&1
echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
echo "RealServer stopped!"

;;
*)
echo "Usage: $0 {start | stop}"
exit 1

esac
exit 0





3,发送邮件的perl脚本sendmail.pl内容如下:
vrrp_sync_group KEEPALIVED_LVS {  
    group {  
        KEEPALIVED_LVS_JIM2
    }  
	notify_master /etc/keepalived/sendmail.pl
}
注意主备发送邮件的标题是不一致的,只要你能识别漂移IP在哪台服务器上即可。
#!/usr/bin/perl -w
use Net::SMTP_auth;
use strict;
my $mailhost = 'smtp.163.com';
my $mailfrom = 'test@163.com';
my @mailto   = ('123456@139.com');
my $subject  = 'keepalived up on backup';
my $text = "正文\n第二行位于此。";  
my $user   = 'test@163.com';
my $passwd = 'xxxxxxx';
&SendMail();
##############################
# Send notice mail
##############################
sub SendMail() {
    my $smtp = Net::SMTP_auth->new( $mailhost, Timeout => 120, Debug => 1 )
      or die "Error.\n";
    $smtp->auth( 'LOGIN', $user, $passwd );
    foreach my $mailto (@mailto) {
        $smtp->mail($mailfrom);
        $smtp->to($mailto);
        $smtp->data();
        $smtp->datasend("To: $mailto\n");
        $smtp->datasend("From:$mailfrom\n");
        $smtp->datasend("Subject: $subject\n");
        $smtp->datasend("\n");
        $smtp->datasend("$text\n\n"); 
        $smtp->dataend();
    }
    $smtp->quit;
}
Notes:
a. keepalived's built-in mail notification is of limited use: it cannot send mail unless port 25 is open locally, so a custom script that authenticates against an external SMTP server is used instead.
b. The remaining configuration options are not covered in detail here; there is plenty of material online.
4,测试keepalived
主备调度器都开启80端口,两台服务器上的测试内容不一致,这样更方便测试。

##########################

#所需安装模块

#use Net::SMTP

#Authen::SASL

##########################

#$stmp->auth('user','pass');

#大部分SMTP服务器为了防止 spam /垃圾邮件,就需要用户验证身份。

#此方法需要另外安装模块:Authen::SASL, 此模块可能系统不自带

##########################

#Debug => 1

#此段代码用于测试之用,所以开启了Debug,一般测试一次完毕,正式使用的话会关闭它。

注:可在命令行界面直接执行:/etc/keepalived/sendmail.pl,看看能否发送邮件成功,如果失败的话则需要安装Net::SMTP_auth模块

安装方法:

yum -y install perl-CPAN
cpan Net::SMTP_auth
java gif动态验证码 动态验证码 http://blog.csdn.net/cyanapple_wen/article/details/5429913
package zaq.test;
import java.awt.AlphaComposite;
import java.awt.Color;
import java.awt.Font;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.util.Random;

import zaq.test.gif.AnimatedGifEncoder;
public class Test2 {
 // Captcha character set; easily-confused characters such as O, I, L, l and o are excluded
 String s[]={"A","B","C","D","E","F","G","H","J","K","M","N","P","Q","R","S","T","U","V","W","X","Y","Z"
    ,"a","b","c","d","e","f","g","h","i","j","k","m","n","p","q","r","s","t","u","v","w","x","y","z"};
 //定义生成的验证码的宽度和高度
 int width=160;
 int height=30;
 
 public void myTest(OutputStream os)
 {
  //OutputStream os=null;
   //生成字符
   AlphaComposite ac = AlphaComposite.getInstance(AlphaComposite.SRC_OVER,1);
   AnimatedGifEncoder agf=new AnimatedGifEncoder();
   agf.start(os);
   agf.setQuality(10);
  
   agf.setDelay(100);
   agf.setRepeat(0);
   BufferedImage frame=null;
   Graphics2D teg=null;
  String rands[]=new String[6];
   for(int i=0;i<6;i++)
   {
    rands[i]=s[this.randomInt(0, s.length)];
   }
  //生成字体
   Font   font[]=new Font[6];
   for(int i=0;i<6;i++)
   {
    font[i]=this.getFont();
   }
   //生成背景颜色
   Color bgcolor=getRandColor(160, 200);
   Color linecolor=getRandColor(200, 250);
   Color fontcolor[]=new Color[6];
   Random random=new Random();
   for(int i=0;i<6;i++)
   {
    fontcolor[i]=new Color(20 + random.nextInt(110), 20 + random.nextInt(110), 20 + random.nextInt(110));
   }
   for(int i=0;i<6;i++)
   {
    frame=this.getImage(bgcolor,linecolor,fontcolor,rands,font,i);
   agf.addFrame(frame);
   frame.flush();
   }
   agf.finish();
 }
 private BufferedImage getImage(Color bgcolor,Color linecolor,Color[] fontcolor,String str[],Font[] font,int flag)
 {
  BufferedImage image = new BufferedImage(width, height,BufferedImage.TYPE_INT_RGB);
  // obtain the graphics context
  Graphics2D g2d=image.createGraphics();
  //利用指定颜色填充背景
  g2d.setColor(bgcolor);
  g2d.fillRect(0, 0, width, height);
  //画背景线 4*4
  g2d.setColor(linecolor);
  for (int i = 0; i < height/4; i++) {
   g2d.drawLine(0, i*4, width, i*4);
  }
  for (int i = 0; i <width/4; i++) {
   g2d.drawLine(i*4, 0, i*4, height);
  }
  AlphaComposite ac = AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.2f);
     g2d.setComposite(ac);
     g2d.setFont(new Font("隶书", Font.ITALIC+Font.BOLD, 26));
    g2d.setColor(Color.red);
  g2d.drawString("jalion制作", 19, 25);
 
 
  // Draw the 6 captcha characters; each one's alpha steps from 0 to 1 in increments of 0.2
  AlphaComposite ac3 =null;
  for(int i=0;i<str.length;i++)
  {
   g2d.setFont(font[i]);
   ac3 = AlphaComposite.getInstance(AlphaComposite.SRC_OVER, getAlpha(flag, i));
    g2d.setComposite(ac3);
   g2d.setColor(fontcolor[i]);
  
   g2d.drawString(str[i], 25 * i + 8, 25);
  }
  g2d.dispose();
  return image;
 }
 
 
 private Font getFont()
 {
  //获得随机字体;
//设置font :字体名称:Monotype Corsiva 华文彩云 方正舒体 华文行楷,隶书
 
  Random s=new Random();
  int i=s.nextInt(10);  // choose one of the fonts at random
  if(i%2==0)
  {
   return new Font("Monotype Corsiva", Font.BOLD, 28);
  }
  else if(i%3==0)
  {
   return new Font("方正舒体", Font.BOLD, 28);
  }
//  else if(i%5==0)
//  {
//   return new Font("华文行楷", Font.BOLD, 28);
//  }
  else if(i%7==0)
  {
   return new Font("隶书", Font.BOLD, 28);
  }
  else
  {
   return new Font("方正舒体", Font.BOLD, 28);
  }
 }
 
 
 
 private float getAlpha(int i,int j)
 {
  if((i+j)>5)
  {
  
   return ((i+j)*0.2f-1.2f);
  }
  else
  {
   return (i+j)*0.2f;
  }
 }
 
 private Color getRandColor(int fc, int bc) {// 给定范围获得随机颜色
  Random random = new Random();
  if (fc > 255)
   fc = 255;
  if (bc > 255)
   bc = 255;
  int r = fc + random.nextInt(bc - fc);
  int g = fc + random.nextInt(bc - fc);
  int b = fc + random.nextInt(bc - fc);
  return new Color(r, g, b);
 }
 
  private int randomInt(int from,int to){
   Random r = new Random();
   return from+r.nextInt(to-from);
  }
  public static void main(String args[]) throws FileNotFoundException
  {
   Test2 tt=new Test2();
   tt.myTest(new FileOutputStream(new File("c://1.gif")));
  }
}
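The fade effect comes from getAlpha above: across the 6 frames each glyph's alpha steps by 0.2 and wraps past 1.0, so every character fades in and out at a different phase. A standalone sketch of the same schedule (class name AlphaDemo is made up for illustration):

```java
public class AlphaDemo {
    // Same schedule as getAlpha(frame, charIndex) in the class above:
    // alpha = (i + j) * 0.2, wrapped back below 1.0 once the sum exceeds 5.
    static float alpha(int i, int j) {
        return (i + j) > 5 ? (i + j) * 0.2f - 1.2f : (i + j) * 0.2f;
    }

    public static void main(String[] args) {
        // Alpha of the first character over the 6 frames: 0.0 0.2 0.4 0.6 0.8 1.0
        for (int frame = 0; frame < 6; frame++)
            System.out.printf("frame %d -> %.1f%n", frame, alpha(frame, 0));
    }
}
```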




import java.awt.*;
import java.io.*;
import java.util.Date;
import java.util.*;
import java.awt.image.*;
import zaq.test.gif.AnimatedGifEncoder;  // GIF encoder used below, same package as in the example above


public class GifValidateCode {
    // Captcha character set; easily-confused characters such as O, I and L are excluded
    String[] s = {"A", "B", "C", "D", "E", "F", "G", "H", "J", "K","M", "N", "P", "Q", "R", "S",
      "T", "U", "V", "W","X", "Y", "Z", "a", "b", "c", "d", "e", "f", "g","h", "i", "j", 
      "k", "m", "n", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", "1", "2", "3",
      "4", "5", "6", "7", "8", "9", "地", "在", "要", "工", "上", "是", "中", "国", "不", "这"};

    //定义生成的验证码的宽度和高度
    int width = 160;
    int height = 35;

    /**
     * 获得(生成)GIF验证码
     * @param os
     * @return String 系统自动生成的验证码的内容
     */
    public String getValidateCode(OutputStream os) {
        BufferedImage frame = null;
        @SuppressWarnings("unused")
  Graphics2D teg = null;
        @SuppressWarnings("unused")
  AlphaComposite ac = AlphaComposite.getInstance(AlphaComposite.SRC_OVER,1);
        AnimatedGifEncoder agf = new AnimatedGifEncoder();
        agf.start(os);
        agf.setQuality(10);
        agf.setDelay(100);
        agf.setRepeat(0);

        // 生成6个随机字符
        String rands[] = new String[6];
        for (int i = 0; i < 6; i++) {
            rands[i] = s[this.randomInt(0, s.length)];
        }

        //分别使用6种字体
        Font[] font = new Font[6];
        font[0] = new Font("Gungsuh", Font.BOLD, 24);
        font[1] = new Font("宋体", Font.BOLD, 24);
        font[2] = new Font("Times New Roman", Font.BOLD, 24);
        font[3] = new Font("隶书", Font.BOLD, 24);
        font[4] = new Font("Arial Unicode MS", Font.BOLD, 24);
        font[5] = new Font("方正舒体", Font.BOLD, 24);

        //生成各种颜色
        Color bgcolor = new Color(255,255,255);
        Color linecolor = getRandColor(200, 250);
        Color fontcolor[] = new Color[6];
        Random random = new Random();
        for (int i = 0; i < 6; i++) {
            fontcolor[i] = new Color(20 + random.nextInt(110),
              20 + random.nextInt(110), 20 + random.nextInt(110));
        }
        // Render one frame per font and append it to the GIF
        for (int i = 0; i < font.length; i++) {
            frame = this.getImage(bgcolor, linecolor, fontcolor, rands, font, i);
            agf.addFrame(frame);
            frame.flush();
        }
        agf.finish();
        return rands[0] + rands[1] + rands[2] + rands[3] + rands[4] + rands[5];
    }

    /**
     * Renders one frame of the code image.
     * @param bgcolor   background color
     * @param linecolor color of the background grid lines
     * @param fontcolor per-character font colors
     * @param str       characters to draw
     * @param font      fonts, one per frame
     * @param flag      frame index, used to cycle the alpha values
     * @return BufferedImage
     */
    private BufferedImage getImage(Color bgcolor, Color linecolor,Color[] fontcolor, 
      String[] str, Font[] font,int flag) {
        BufferedImage image = new BufferedImage(width, height,
                                                BufferedImage.TYPE_INT_RGB);
        // Obtain the graphics context
        Graphics2D g2d = image.createGraphics();
        // Fill the background with the given color
        g2d.setColor(bgcolor);
        g2d.fillRect(0, 0, width, height);
        // Draw a background grid with 3px spacing (smaller spacing = denser grid)
        g2d.setColor(linecolor);
        for (int i = 0; i < height / 3; i++)
            g2d.drawLine(0, i * 3, width, i * 3);
        for (int i = 0; i < width / 3; i++)
            g2d.drawLine(i * 3, 0, i * 3, height);
        // Optional background watermark (disabled)
        /*AlphaComposite ac = AlphaComposite.getInstance(AlphaComposite.SRC_OVER,
                0.2f);
        g2d.setComposite(ac);
        g2d.setFont(new Font("隶书", Font.ITALIC + Font.BOLD, 26));
        g2d.setColor(Color.red);
        g2d.drawString("HAHA小组", 19, 25);*/

        // Draw the code characters.
        // Alpha cycles from 0 to 1 in steps of 0.2 across the 6 characters.
        //int[] height = {23, 31, 27, 22, 25, 29}; // fixed-height alternative
        
        int[] height = new int[6];
        
        Random rand = new Random();
        
        for(int i=0; i<6; i++){
            height[i] = rand.nextInt(15) + 20;
        }
        AlphaComposite ac3 = null;
        for (int i = 0; i < str.length; i++) {
            g2d.setFont(font[i]);
            ac3 = AlphaComposite.getInstance(AlphaComposite.SRC_OVER,
                                             getAlpha(flag, i));
            g2d.setComposite(ac3);
            g2d.setColor(fontcolor[i]);
            g2d.drawString(str[i], 25 * i + 8, height[i]);
        }
        
        // Draw 200 noise dots (interference lines would work the same way)
        for (int i = 0; i < 200; i++) {
            g2d.setColor(new Color(0x123456)); // noise dot color (fixed here)
            int x = rand.nextInt(width);
            int y = rand.nextInt(this.height);
            g2d.drawOval(x, y, 1, 1);
        }
     
        // Draw 15 interference lines
        Random rnd = new Random(new Date().getTime());
        for (int i = 0; i < 15; i++) {
            g2d.setColor(getRandColor(100, 155)); // random line color
            int x1 = rnd.nextInt(width);          // line endpoints
            int y1 = rnd.nextInt(this.height);
            int x2 = rnd.nextInt(width);
            int y2 = rnd.nextInt(this.height);
            g2d.drawLine(x1, y1, x2, y2);
        }
     
        g2d.dispose();
        return image;
    }

    /**
     * Returns the alpha for character j in frame i; values cycle
     * from 0 to 1 in steps of 0.2.
     * @param i frame index
     * @param j character index
     * @return float alpha value
     */
    private float getAlpha(int i, int j) {
        if ((i + j) > 5)
            return ((i + j) * 0.2f - 1.2f);
        else 
            return (i + j) * 0.2f;        
    }

    /**
     * Returns a random color with each channel in the range [fc, bc).
     * @param fc lower bound for each channel
     * @param bc upper bound for each channel
     * @return Color
     */
    private Color getRandColor(int fc, int bc) { 
        Random random = new Random();
        if (fc > 255)
            fc = 255;
        if (bc > 255)
            bc = 255;
        int r = fc + random.nextInt(bc - fc);
        int g = fc + random.nextInt(bc - fc);
        int b = fc + random.nextInt(bc - fc);
        return new Color(r, g, b);
    }

    Random r = new Random();

    /**
     * Returns a random integer in [from, to).
     * @param from inclusive lower bound
     * @param to exclusive upper bound
     * @return a random integer in [from, to)
     */
    private int randomInt(int from, int to) {
        return from + r.nextInt(to - from);
    }
}
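The alpha cycling that makes the characters fade in and out (see getAlpha above) is easy to verify in isolation. The sketch below reproduces that formula in a standalone class (AlphaCycleDemo is a name chosen for illustration, not part of the original code):

```java
// Standalone reproduction of GifValidateCode.getAlpha: across the 6 frames,
// each character's opacity steps 0.0, 0.2, ... 1.0 and then wraps around.
public class AlphaCycleDemo {
    static float getAlpha(int frame, int ch) {
        int s = frame + ch;
        return (s > 5) ? s * 0.2f - 1.2f : s * 0.2f;
    }

    public static void main(String[] args) {
        // In frame 0 the characters fade in from left to right.
        for (int ch = 0; ch < 6; ch++) {
            System.out.printf("frame 0, char %d -> alpha %.1f%n", ch, getAlpha(0, ch));
        }
        // frame 3, char 4: (3 + 4) * 0.2 - 1.2 = 0.2 (wrapped past full opacity)
        System.out.printf("frame 3, char 4 -> alpha %.1f%n", getAlpha(3, 4));
    }
}
```

Because frame + ch never exceeds 10, the result always stays within [0, 1], which is what AlphaComposite.getInstance requires for its alpha argument.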

Below are the four helper classes.

1. AnimatedGifEncoder.java
package com.accp.bookshop.util;

import java.io.*;
import java.awt.*;
import java.awt.image.*;

/**
 * Class AnimatedGifEncoder - Encodes a GIF file consisting of one or
 * more frames.
 * <pre>
 * Example:
 *    AnimatedGifEncoder e = new AnimatedGifEncoder();
 *    e.start(outputFileName);
 *    e.setDelay(1000);   // 1 frame per sec
 *    e.addFrame(image1);
 *    e.addFrame(image2);
 *    e.finish();
 * </pre>
 * No copyright asserted on the source code of this class.  May be used
 * for any purpose, however, refer to the Unisys LZW patent for restrictions
 * on use of the associated LZWEncoder class.  Please forward any corrections
 * to kweiner@fmsware.com.
 *
 * @author Kevin Weiner, FM Software
 * @version 1.03 November 2003
 *
 */

public class AnimatedGifEncoder {

	protected int width; // image size
	protected int height;
	protected Color transparent = null; // transparent color if given
	protected int transIndex; // transparent index in color table
	protected int repeat = -1; // no repeat
	protected int delay = 0; // frame delay (hundredths)
	protected boolean started = false; // ready to output frames
	protected OutputStream out;
	protected BufferedImage image; // current frame
	protected byte[] pixels; // BGR byte array from frame
	protected byte[] indexedPixels; // converted frame indexed to palette
	protected int colorDepth; // number of bit planes
	protected byte[] colorTab; // RGB palette
	protected boolean[] usedEntry = new boolean[256]; // active palette entries
	protected int palSize = 7; // color table size (bits-1)
	protected int dispose = -1; // disposal code (-1 = use default)
	protected boolean closeStream = false; // close stream when finished
	protected boolean firstFrame = true;
	protected boolean sizeSet = false; // if false, get size from first frame
	protected int sample = 10; // default sample interval for quantizer

	/**
	 * Sets the delay time between each frame, or changes it
	 * for subsequent frames (applies to last frame added).
	 *
	 * @param ms int delay time in milliseconds
	 */
	public void setDelay(int ms) {
		delay = Math.round(ms / 10.0f);
	}

	/**
	 * Sets the GIF frame disposal code for the last added frame
	 * and any subsequent frames.  Default is 0 if no transparent
	 * color has been set, otherwise 2.
	 * @param code int disposal code.
	 */
	public void setDispose(int code) {
		if (code >= 0) {
			dispose = code;
		}
	}

	/**
	 * Sets the number of times the set of GIF frames
	 * should be played.  Default is 1; 0 means play
	 * indefinitely.  Must be invoked before the first
	 * image is added.
	 *
	 * @param iter int number of iterations.
	 */
	public void setRepeat(int iter) {
		if (iter >= 0) {
			repeat = iter;
		}
	}

	/**
	 * Sets the transparent color for the last added frame
	 * and any subsequent frames.
	 * Since all colors are subject to modification
	 * in the quantization process, the color in the final
	 * palette for each frame closest to the given color
	 * becomes the transparent color for that frame.
	 * May be set to null to indicate no transparent color.
	 *
	 * @param c Color to be treated as transparent on display.
	 */
	public void setTransparent(Color c) {
		transparent = c;
	}

	/**
	 * Adds next GIF frame.  The frame is not written immediately, but is
	 * actually deferred until the next frame is received so that timing
	 * data can be inserted.  Invoking <code>finish()</code> flushes all
	 * frames.  If <code>setSize</code> was not invoked, the size of the
	 * first image is used for all subsequent frames.
	 *
	 * @param im BufferedImage containing frame to write.
	 * @return true if successful.
	 */
	public boolean addFrame(BufferedImage im) {
		if ((im == null) || !started) {
			return false;
		}
		boolean ok = true;
		try {
			if (!sizeSet) {
				// use first frame's size
				setSize(im.getWidth(), im.getHeight());
			}
			image = im;
			getImagePixels(); // convert to correct format if necessary
			analyzePixels(); // build color table & map pixels
			if (firstFrame) {
				writeLSD(); // logical screen descriptor
				writePalette(); // global color table
				if (repeat >= 0) {
					// use NS app extension to indicate reps
					writeNetscapeExt();
				}
			}
			writeGraphicCtrlExt(); // write graphic control extension
			writeImageDesc(); // image descriptor
			if (!firstFrame) {
				writePalette(); // local color table
			}
			writePixels(); // encode and write pixel data
			firstFrame = false;
		} catch (IOException e) {
			ok = false;
		}

		return ok;
	}

	/**
	 * Flushes any pending data and closes output file.
	 * If writing to an OutputStream, the stream is not
	 * closed.
	 */
	public boolean finish() {
		if (!started) return false;
		boolean ok = true;
		started = false;
		try {
			out.write(0x3b); // gif trailer
			out.flush();
			if (closeStream) {
				out.close();
			}
		} catch (IOException e) {
			ok = false;
		}

		// reset for subsequent use
		transIndex = 0;
		out = null;
		image = null;
		pixels = null;
		indexedPixels = null;
		colorTab = null;
		closeStream = false;
		firstFrame = true;

		return ok;
	}

	/**
	 * Sets frame rate in frames per second.  Equivalent to
	 * <code>setDelay(1000/fps)</code>.
	 *
	 * @param fps float frame rate (frames per second)
	 */
	public void setFrameRate(float fps) {
		if (fps != 0f) {
			delay = Math.round(100f / fps);
		}
	}

	/**
	 * Sets quality of color quantization (conversion of images
	 * to the maximum 256 colors allowed by the GIF specification).
	 * Lower values (minimum = 1) produce better colors, but slow
	 * processing significantly.  10 is the default, and produces
	 * good color mapping at reasonable speeds.  Values greater
	 * than 20 do not yield significant improvements in speed.
	 *
	 * @param quality int greater than 0.
	 */
	public void setQuality(int quality) {
		if (quality < 1) quality = 1;
		sample = quality;
	}

	/**
	 * Sets the GIF frame size.  The default size is the
	 * size of the first frame added if this method is
	 * not invoked.
	 *
	 * @param w int frame width.
	 * @param h int frame height.
	 */
	public void setSize(int w, int h) {
		if (started && !firstFrame) return;
		width = w;
		height = h;
		if (width < 1) width = 320;
		if (height < 1) height = 240;
		sizeSet = true;
	}

	/**
	 * Initiates GIF file creation on the given stream.  The stream
	 * is not closed automatically.
	 *
	 * @param os OutputStream on which GIF images are written.
	 * @return false if initial write failed.
	 */
	public boolean start(OutputStream os) {
		if (os == null) return false;
		boolean ok = true;
		closeStream = false;
		out = os;
		try {
			writeString("GIF89a"); // header
		} catch (IOException e) {
			ok = false;
		}
		return started = ok;
	}

	/**
	 * Initiates writing of a GIF file with the specified name.
	 *
	 * @param file String containing output file name.
	 * @return false if open or initial write failed.
	 */
	public boolean start(String file) {
		boolean ok = true;
		try {
			out = new BufferedOutputStream(new FileOutputStream(file));
			ok = start(out);
			closeStream = true;
		} catch (IOException e) {
			ok = false;
		}
		return started = ok;
	}

	/**
	 * Analyzes image colors and creates color map.
	 */
	protected void analyzePixels() {
		int len = pixels.length;
		int nPix = len / 3;
		indexedPixels = new byte[nPix];
		NeuQuant nq = new NeuQuant(pixels, len, sample);
		// initialize quantizer
		colorTab = nq.process(); // create reduced palette
		// convert map from BGR to RGB
		for (int i = 0; i < colorTab.length; i += 3) {
			byte temp = colorTab[i];
			colorTab[i] = colorTab[i + 2];
			colorTab[i + 2] = temp;
			usedEntry[i / 3] = false;
		}
		// map image pixels to new palette
		int k = 0;
		for (int i = 0; i < nPix; i++) {
			int index =
				nq.map(pixels[k++] & 0xff,
					   pixels[k++] & 0xff,
					   pixels[k++] & 0xff);
			usedEntry[index] = true;
			indexedPixels[i] = (byte) index;
		}
		pixels = null;
		colorDepth = 8;
		palSize = 7;
		// get closest match to transparent color if specified
		if (transparent != null) {
			transIndex = findClosest(transparent);
		}
	}

	/**
	 * Returns index of palette color closest to c
	 *
	 */
	protected int findClosest(Color c) {
		if (colorTab == null) return -1;
		int r = c.getRed();
		int g = c.getGreen();
		int b = c.getBlue();
		int minpos = 0;
		int dmin = 256 * 256 * 256;
		int len = colorTab.length;
		for (int i = 0; i < len;) {
			int dr = r - (colorTab[i++] & 0xff);
			int dg = g - (colorTab[i++] & 0xff);
			int db = b - (colorTab[i] & 0xff);
			int d = dr * dr + dg * dg + db * db;
			int index = i / 3;
			if (usedEntry[index] && (d < dmin)) {
				dmin = d;
				minpos = index;
			}
			i++;
		}
		return minpos;
	}

	/**
	 * Extracts image pixels into byte array "pixels"
	 */
	protected void getImagePixels() {
		int w = image.getWidth();
		int h = image.getHeight();
		int type = image.getType();
		if ((w != width)
			|| (h != height)
			|| (type != BufferedImage.TYPE_3BYTE_BGR)) {
			// create new image with right size/format
			BufferedImage temp =
				new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
			Graphics2D g = temp.createGraphics();
			g.drawImage(image, 0, 0, null);
			image = temp;
		}
		pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
	}

	/**
	 * Writes Graphic Control Extension
	 */
	protected void writeGraphicCtrlExt() throws IOException {
		out.write(0x21); // extension introducer
		out.write(0xf9); // GCE label
		out.write(4); // data block size
		int transp, disp;
		if (transparent == null) {
			transp = 0;
			disp = 0; // dispose = no action
		} else {
			transp = 1;
			disp = 2; // force clear if using transparent color
		}
		if (dispose >= 0) {
			disp = dispose & 7; // user override
		}
		disp <<= 2;

		// packed fields
		out.write(0 | // 1:3 reserved
			   disp | // 4:6 disposal
			      0 | // 7   user input - 0 = none
		     transp); // 8   transparency flag

		writeShort(delay); // delay x 1/100 sec
		out.write(transIndex); // transparent color index
		out.write(0); // block terminator
	}

	/**
	 * Writes Image Descriptor
	 */
	protected void writeImageDesc() throws IOException {
		out.write(0x2c); // image separator
		writeShort(0); // image position x,y = 0,0
		writeShort(0);
		writeShort(width); // image size
		writeShort(height);
		// packed fields
		if (firstFrame) {
			// no LCT  - GCT is used for first (or only) frame
			out.write(0);
		} else {
			// specify normal LCT
			out.write(0x80 | // 1 local color table  1=yes
						 0 | // 2 interlace - 0=no
						 0 | // 3 sorted - 0=no
						 0 | // 4-5 reserved
				   palSize); // 6-8 size of color table
		}
	}

	/**
	 * Writes Logical Screen Descriptor
	 */
	protected void writeLSD() throws IOException {
		// logical screen size
		writeShort(width);
		writeShort(height);
		// packed fields
		out.write((0x80 | // 1   : global color table flag = 1 (gct used)
				   0x70 | // 2-4 : color resolution = 7
				   0x00 | // 5   : gct sort flag = 0
			   palSize)); // 6-8 : gct size

		out.write(0); // background color index
		out.write(0); // pixel aspect ratio - assume 1:1
	}

	/**
	 * Writes Netscape application extension to define
	 * repeat count.
	 */
	protected void writeNetscapeExt() throws IOException {
		out.write(0x21); // extension introducer
		out.write(0xff); // app extension label
		out.write(11); // block size
		writeString("NETSCAPE" + "2.0"); // app id + auth code
		out.write(3); // sub-block size
		out.write(1); // loop sub-block id
		writeShort(repeat); // loop count (extra iterations, 0=repeat forever)
		out.write(0); // block terminator
	}

	/**
	 * Writes color table
	 */
	protected void writePalette() throws IOException {
		out.write(colorTab, 0, colorTab.length);
		int n = (3 * 256) - colorTab.length;
		for (int i = 0; i < n; i++) {
			out.write(0);
		}
	}

	/**
	 * Encodes and writes pixel data
	 */
	protected void writePixels() throws IOException {
		LZWEncoder encoder =
			new LZWEncoder(width, height, indexedPixels, colorDepth);
		encoder.encode(out);
	}

	/**
	 *    Write 16-bit value to output stream, LSB first
	 */
	protected void writeShort(int value) throws IOException {
		out.write(value & 0xff);
		out.write((value >> 8) & 0xff);
	}

	/**
	 * Writes string to output stream
	 */
	protected void writeString(String s) throws IOException {
		for (int i = 0; i < s.length(); i++) {
			out.write((byte) s.charAt(i));
		}
	}
}
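The low-level conventions this encoder relies on — the ASCII "GIF89a" signature written by start(), the LSB-first 16-bit fields written by writeShort(), and the 0x3b trailer written by finish() — can be exercised on their own against an in-memory stream. A minimal sketch (GifBytesDemo is a hypothetical name, not part of the library):

```java
import java.io.ByteArrayOutputStream;

// Writes the GIF framing bytes the same way AnimatedGifEncoder does and
// inspects the result: "GIF89a" header, little-endian size fields, 0x3b trailer.
public class GifBytesDemo {
    static void writeShort(ByteArrayOutputStream out, int value) {
        out.write(value & 0xff);        // low byte first (LSB)
        out.write((value >> 8) & 0xff); // high byte second
    }

    public static void main(String[] args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (char c : "GIF89a".toCharArray()) out.write((byte) c); // header
        writeShort(out, 160); // logical screen width (GifValidateCode's width)
        writeShort(out, 35);  // logical screen height
        out.write(0x3b);      // GIF trailer
        byte[] b = out.toByteArray();
        System.out.println(b.length);      // 11
        System.out.println(b[6] & 0xff);   // 160 (low byte of width)
        System.out.println(b[7] & 0xff);   // 0   (high byte of width)
        System.out.println(b[10] & 0xff);  // 59, i.e. 0x3b
    }
}
```

The same little-endian layout is what GifDecoder.readShort reverses when it parses the logical screen descriptor.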

2. GifDecoder.java
package com.accp.bookshop.util;

import java.net.*;
import java.io.*;
import java.util.*;
import java.awt.*;
import java.awt.image.*;

/**
 * Class GifDecoder - Decodes a GIF file into one or more frames.
 * <br><pre>
 * Example:
 *    GifDecoder d = new GifDecoder();
 *    d.read("sample.gif");
 *    int n = d.getFrameCount();
 *    for (int i = 0; i < n; i++) {
 *       BufferedImage frame = d.getFrame(i);  // frame i
 *       int t = d.getDelay(i);  // display duration of frame in milliseconds
 *       // do something with frame
 *    }
 * </pre>
 * No copyright asserted on the source code of this class.  May be used for
 * any purpose, however, refer to the Unisys LZW patent for any additional
 * restrictions.  Please forward any corrections to kweiner@fmsware.com.
 *
 * @author Kevin Weiner, FM Software; LZW decoder adapted from John Cristy's ImageMagick.
 * @version 1.03 November 2003
 *
 */

public class GifDecoder {

	/**
	 * File read status: No errors.
	 */
	public static final int STATUS_OK = 0;

	/**
	 * File read status: Error decoding file (may be partially decoded)
	 */
	public static final int STATUS_FORMAT_ERROR = 1;

	/**
	 * File read status: Unable to open source.
	 */
	public static final int STATUS_OPEN_ERROR = 2;

	protected BufferedInputStream in;
	protected int status;

	protected int width; // full image width
	protected int height; // full image height
	protected boolean gctFlag; // global color table used
	protected int gctSize; // size of global color table
	protected int loopCount = 1; // iterations; 0 = repeat forever

	protected int[] gct; // global color table
	protected int[] lct; // local color table
	protected int[] act; // active color table

	protected int bgIndex; // background color index
	protected int bgColor; // background color
	protected int lastBgColor; // previous bg color
	protected int pixelAspect; // pixel aspect ratio

	protected boolean lctFlag; // local color table flag
	protected boolean interlace; // interlace flag
	protected int lctSize; // local color table size

	protected int ix, iy, iw, ih; // current image rectangle
	protected Rectangle lastRect; // last image rect
	protected BufferedImage image; // current frame
	protected BufferedImage lastImage; // previous frame

	protected byte[] block = new byte[256]; // current data block
	protected int blockSize = 0; // block size

	// last graphic control extension info
	protected int dispose = 0;
	// 0=no action; 1=leave in place; 2=restore to bg; 3=restore to prev
	protected int lastDispose = 0;
	protected boolean transparency = false; // use transparent color
	protected int delay = 0; // delay in milliseconds
	protected int transIndex; // transparent color index

	protected static final int MaxStackSize = 4096;
	// max decoder pixel stack size

	// LZW decoder working arrays
	protected short[] prefix;
	protected byte[] suffix;
	protected byte[] pixelStack;
	protected byte[] pixels;

	protected ArrayList frames; // frames read from current file
	protected int frameCount;

	static class GifFrame {
		public GifFrame(BufferedImage im, int del) {
			image = im;
			delay = del;
		}
		public BufferedImage image;
		public int delay;
	}

	/**
	 * Gets display duration for specified frame.
	 *
	 * @param n int index of frame
	 * @return delay in milliseconds
	 */
	public int getDelay(int n) {
		//
		delay = -1;
		if ((n >= 0) && (n < frameCount)) {
			delay = ((GifFrame) frames.get(n)).delay;
		}
		return delay;
	}

	/**
	 * Gets the number of frames read from file.
	 * @return frame count
	 */
	public int getFrameCount() {
		return frameCount;
	}

	/**
	 * Gets the first (or only) image read.
	 *
	 * @return BufferedImage containing first frame, or null if none.
	 */
	public BufferedImage getImage() {
		return getFrame(0);
	}

	/**
	 * Gets the "Netscape" iteration count, if any.
	 * A count of 0 means repeat indefinitely.
	 *
	 * @return iteration count if one was specified, else 1.
	 */
	public int getLoopCount() {
		return loopCount;
	}

	/**
	 * Creates new frame image from current data (and previous
	 * frames as specified by their disposition codes).
	 */
	protected void setPixels() {
		// expose destination image's pixels as int array
		int[] dest =
			((DataBufferInt) image.getRaster().getDataBuffer()).getData();

		// fill in starting image contents based on last image's dispose code
		if (lastDispose > 0) {
			if (lastDispose == 3) {
				// use image before last
				int n = frameCount - 2;
				if (n > 0) {
					lastImage = getFrame(n - 1);
				} else {
					lastImage = null;
				}
			}

			if (lastImage != null) {
				int[] prev =
					((DataBufferInt) lastImage.getRaster().getDataBuffer()).getData();
				System.arraycopy(prev, 0, dest, 0, width * height);
				// copy pixels

				if (lastDispose == 2) {
					// fill last image rect area with background color
					Graphics2D g = image.createGraphics();
					Color c = null;
					if (transparency) {
						c = new Color(0, 0, 0, 0); 	// assume background is transparent
					} else {
						c = new Color(lastBgColor); // use given background color
					}
					g.setColor(c);
					g.setComposite(AlphaComposite.Src); // replace area
					g.fill(lastRect);
					g.dispose();
				}
			}
		}

		// copy each source line to the appropriate place in the destination
		int pass = 1;
		int inc = 8;
		int iline = 0;
		for (int i = 0; i < ih; i++) {
			int line = i;
			if (interlace) {
				if (iline >= ih) {
					pass++;
					switch (pass) {
						case 2 :
							iline = 4;
							break;
						case 3 :
							iline = 2;
							inc = 4;
							break;
						case 4 :
							iline = 1;
							inc = 2;
					}
				}
				line = iline;
				iline += inc;
			}
			line += iy;
			if (line < height) {
				int k = line * width;
				int dx = k + ix; // start of line in dest
				int dlim = dx + iw; // end of dest line
				if ((k + width) < dlim) {
					dlim = k + width; // past dest edge
				}
				int sx = i * iw; // start of line in source
				while (dx < dlim) {
					// map color and insert in destination
					int index = ((int) pixels[sx++]) & 0xff;
					int c = act[index];
					if (c != 0) {
						dest[dx] = c;
					}
					dx++;
				}
			}
		}
	}

	/**
	 * Gets the image contents of frame n.
	 *
	 * @return BufferedImage representation of frame, or null if n is invalid.
	 */
	public BufferedImage getFrame(int n) {
		BufferedImage im = null;
		if ((n >= 0) && (n < frameCount)) {
			im = ((GifFrame) frames.get(n)).image;
		}
		return im;
	}

	/**
	 * Gets image size.
	 *
	 * @return GIF image dimensions
	 */
	public Dimension getFrameSize() {
		return new Dimension(width, height);
	}

	/**
	 * Reads GIF image from stream
	 *
	 * @param is BufferedInputStream containing GIF file.
	 * @return read status code (0 = no errors)
	 */
	public int read(BufferedInputStream is) {
		init();
		if (is != null) {
			in = is;
			readHeader();
			if (!err()) {
				readContents();
				if (frameCount < 0) {
					status = STATUS_FORMAT_ERROR;
				}
			}
		} else {
			status = STATUS_OPEN_ERROR;
		}
		try {
			is.close();
		} catch (IOException e) {
		}
		return status;
	}

	/**
	 * Reads GIF image from stream
	 *
	 * @param is InputStream containing GIF file.
	 * @return read status code (0 = no errors)
	 */
	public int read(InputStream is) {
		init();
		if (is != null) {
			if (!(is instanceof BufferedInputStream))
				is = new BufferedInputStream(is);
			in = (BufferedInputStream) is;
			readHeader();
			if (!err()) {
				readContents();
				if (frameCount < 0) {
					status = STATUS_FORMAT_ERROR;
				}
			}
		} else {
			status = STATUS_OPEN_ERROR;
		}
		try {
			is.close();
		} catch (IOException e) {
		}
		return status;
	}

	/**
	 * Reads GIF file from specified file/URL source
	 * (URL assumed if name contains ":/" or "file:")
	 *
	 * @param name String containing source
	 * @return read status code (0 = no errors)
	 */
	public int read(String name) {
		status = STATUS_OK;
		try {
			name = name.trim().toLowerCase();
			if ((name.indexOf("file:") >= 0) ||
				(name.indexOf(":/") > 0)) {
				URL url = new URL(name);
				in = new BufferedInputStream(url.openStream());
			} else {
				in = new BufferedInputStream(new FileInputStream(name));
			}
			status = read(in);
		} catch (IOException e) {
			status = STATUS_OPEN_ERROR;
		}

		return status;
	}

	/**
	 * Decodes LZW image data into pixel array.
	 * Adapted from John Cristy's ImageMagick.
	 */
	protected void decodeImageData() {
		int NullCode = -1;
		int npix = iw * ih;
		int available,
			clear,
			code_mask,
			code_size,
			end_of_information,
			in_code,
			old_code,
			bits,
			code,
			count,
			i,
			datum,
			data_size,
			first,
			top,
			bi,
			pi;

		if ((pixels == null) || (pixels.length < npix)) {
			pixels = new byte[npix]; // allocate new pixel array
		}
		if (prefix == null) prefix = new short[MaxStackSize];
		if (suffix == null) suffix = new byte[MaxStackSize];
		if (pixelStack == null) pixelStack = new byte[MaxStackSize + 1];

		//  Initialize GIF data stream decoder.

		data_size = read();
		clear = 1 << data_size;
		end_of_information = clear + 1;
		available = clear + 2;
		old_code = NullCode;
		code_size = data_size + 1;
		code_mask = (1 << code_size) - 1;
		for (code = 0; code < clear; code++) {
			prefix[code] = 0;
			suffix[code] = (byte) code;
		}

		//  Decode GIF pixel stream.

		datum = bits = count = first = top = pi = bi = 0;

		for (i = 0; i < npix;) {
			if (top == 0) {
				if (bits < code_size) {
					//  Load bytes until there are enough bits for a code.
					if (count == 0) {
						// Read a new data block.
						count = readBlock();
						if (count <= 0)
							break;
						bi = 0;
					}
					datum += (((int) block[bi]) & 0xff) << bits;
					bits += 8;
					bi++;
					count--;
					continue;
				}

				//  Get the next code.

				code = datum & code_mask;
				datum >>= code_size;
				bits -= code_size;

				//  Interpret the code

				if ((code > available) || (code == end_of_information))
					break;
				if (code == clear) {
					//  Reset decoder.
					code_size = data_size + 1;
					code_mask = (1 << code_size) - 1;
					available = clear + 2;
					old_code = NullCode;
					continue;
				}
				if (old_code == NullCode) {
					pixelStack[top++] = suffix[code];
					old_code = code;
					first = code;
					continue;
				}
				in_code = code;
				if (code == available) {
					pixelStack[top++] = (byte) first;
					code = old_code;
				}
				while (code > clear) {
					pixelStack[top++] = suffix[code];
					code = prefix[code];
				}
				first = ((int) suffix[code]) & 0xff;

				//  Add a new string to the string table,

				if (available >= MaxStackSize)
					break;
				pixelStack[top++] = (byte) first;
				prefix[available] = (short) old_code;
				suffix[available] = (byte) first;
				available++;
				if (((available & code_mask) == 0)
					&& (available < MaxStackSize)) {
					code_size++;
					code_mask += available;
				}
				old_code = in_code;
			}

			//  Pop a pixel off the pixel stack.

			top--;
			pixels[pi++] = pixelStack[top];
			i++;
		}

		for (i = pi; i < npix; i++) {
			pixels[i] = 0; // clear missing pixels
		}

	}

	/**
	 * Returns true if an error was encountered during reading/decoding
	 */
	protected boolean err() {
		return status != STATUS_OK;
	}

	/**
	 * Initializes or re-initializes reader
	 */
	protected void init() {
		status = STATUS_OK;
		frameCount = 0;
		frames = new ArrayList();
		gct = null;
		lct = null;
	}

	/**
	 * Reads a single byte from the input stream.
	 */
	protected int read() {
		int curByte = 0;
		try {
			curByte = in.read();
		} catch (IOException e) {
			status = STATUS_FORMAT_ERROR;
		}
		return curByte;
	}

	/**
	 * Reads next variable length block from input.
	 *
	 * @return number of bytes stored in "buffer"
	 */
	protected int readBlock() {
		blockSize = read();
		int n = 0;
		if (blockSize > 0) {
			try {
				int count = 0;
				while (n < blockSize) {
					count = in.read(block, n, blockSize - n);
					if (count == -1)
						break;
					n += count;
				}
			} catch (IOException e) {
			}

			if (n < blockSize) {
				status = STATUS_FORMAT_ERROR;
			}
		}
		return n;
	}

	/**
	 * Reads color table as 256 RGB integer values
	 *
	 * @param ncolors int number of colors to read
	 * @return int array containing 256 colors (packed ARGB with full alpha)
	 */
	protected int[] readColorTable(int ncolors) {
		int nbytes = 3 * ncolors;
		int[] tab = null;
		byte[] c = new byte[nbytes];
		int n = 0;
		try {
			n = in.read(c);
		} catch (IOException e) {
		}
		if (n < nbytes) {
			status = STATUS_FORMAT_ERROR;
		} else {
			tab = new int[256]; // max size to avoid bounds checks
			int i = 0;
			int j = 0;
			while (i < ncolors) {
				int r = ((int) c[j++]) & 0xff;
				int g = ((int) c[j++]) & 0xff;
				int b = ((int) c[j++]) & 0xff;
				tab[i++] = 0xff000000 | (r << 16) | (g << 8) | b;
			}
		}
		return tab;
	}

	/**
	 * Main file parser.  Reads GIF content blocks.
	 */
	protected void readContents() {
		// read GIF file content blocks
		boolean done = false;
		while (!(done || err())) {
			int code = read();
			switch (code) {

				case 0x2C : // image separator
					readImage();
					break;

				case 0x21 : // extension
					code = read();
					switch (code) {
						case 0xf9 : // graphics control extension
							readGraphicControlExt();
							break;

						case 0xff : // application extension
							readBlock();
							String app = "";
							for (int i = 0; i < 11; i++) {
								app += (char) block[i];
							}
							if (app.equals("NETSCAPE2.0")) {
								readNetscapeExt();
							}
							else
								skip(); // don't care
							break;

						default : // uninteresting extension
							skip();
					}
					break;

				case 0x3b : // terminator
					done = true;
					break;

				case 0x00 : // bad byte, but keep going and see what happens
					break;

				default :
					status = STATUS_FORMAT_ERROR;
			}
		}
	}

	/**
	 * Reads Graphics Control Extension values
	 */
	protected void readGraphicControlExt() {
		read(); // block size
		int packed = read(); // packed fields
		dispose = (packed & 0x1c) >> 2; // disposal method
		if (dispose == 0) {
			dispose = 1; // elect to keep old image if discretionary
		}
		transparency = (packed & 1) != 0;
		delay = readShort() * 10; // delay in milliseconds
		transIndex = read(); // transparent color index
		read(); // block terminator
	}

	/**
	 * Reads GIF file header information.
	 */
	protected void readHeader() {
		String id = "";
		for (int i = 0; i < 6; i++) {
			id += (char) read();
		}
		if (!id.startsWith("GIF")) {
			status = STATUS_FORMAT_ERROR;
			return;
		}

		readLSD();
		if (gctFlag && !err()) {
			gct = readColorTable(gctSize);
			bgColor = gct[bgIndex];
		}
	}

	/**
	 * Reads next frame image
	 */
	protected void readImage() {
		ix = readShort(); // (sub)image position & size
		iy = readShort();
		iw = readShort();
		ih = readShort();

		int packed = read();
		lctFlag = (packed & 0x80) != 0; // 1 - local color table flag
		interlace = (packed & 0x40) != 0; // 2 - interlace flag
		// 3 - sort flag
		// 4-5 - reserved
		lctSize = 2 << (packed & 7); // 6-8 - local color table size

		if (lctFlag) {
			lct = readColorTable(lctSize); // read table
			act = lct; // make local table active
		} else {
			act = gct; // make global table active
			if (bgIndex == transIndex)
				bgColor = 0;
		}
		if (act == null) {
			status = STATUS_FORMAT_ERROR; // no color table defined
		}

		if (err()) return;

		int save = 0;
		if (transparency) {
			save = act[transIndex]; // remember the original color
			act[transIndex] = 0; // set transparent color if specified
		}

		decodeImageData(); // decode pixel data
		skip();

		if (err()) return;

		frameCount++;

		// create new image to receive frame data
		image =
			new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB_PRE);

		setPixels(); // transfer pixel data to image

		frames.add(new GifFrame(image, delay)); // add image to frame list

		if (transparency) {
			act[transIndex] = save;
		}
		resetFrame();

	}

	/**
	 * Reads Logical Screen Descriptor
	 */
	protected void readLSD() {

		// logical screen size
		width = readShort();
		height = readShort();

		// packed fields
		int packed = read();
		gctFlag = (packed & 0x80) != 0; // 1   : global color table flag
		// 2-4 : color resolution
		// 5   : gct sort flag
		gctSize = 2 << (packed & 7); // 6-8 : gct size

		bgIndex = read(); // background color index
		pixelAspect = read(); // pixel aspect ratio
	}

	/**
	 * Reads Netscape extension to obtain iteration count
	 */
	protected void readNetscapeExt() {
		do {
			readBlock();
			if (block[0] == 1) {
				// loop count sub-block
				int b1 = ((int) block[1]) & 0xff;
				int b2 = ((int) block[2]) & 0xff;
				loopCount = (b2 << 8) | b1;
			}
		} while ((blockSize > 0) && !err());
	}

	/**
	 * Reads next 16-bit value, LSB first
	 */
	protected int readShort() {
		// read 16-bit value, LSB first
		return read() | (read() << 8);
	}

	/**
	 * Resets frame state for reading next image.
	 */
	protected void resetFrame() {
		lastDispose = dispose;
		lastRect = new Rectangle(ix, iy, iw, ih);
		lastImage = image;
		lastBgColor = bgColor;
		dispose = 0;
		transparency = false;
		delay = 0;
		lct = null;
	}

	/**
	 * Skips variable length blocks up to and including
	 * next zero length block.
	 */
	protected void skip() {
		do {
			readBlock();
		} while ((blockSize > 0) && !err());
	}
}
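The packed-byte decoding used in readGraphicControlExt and readLSD above can be exercised on its own. This is a minimal standalone sketch; the helper names and the sample byte values are invented for illustration, but the bit masks mirror the decoder:

```java
public class GifPackedFields {
	// Graphic Control Extension packed byte:
	// bits 2-4 hold the disposal method, bit 0 the transparency flag.
	static int disposalMethod(int packed) {
		return (packed & 0x1c) >> 2;
	}

	static boolean transparency(int packed) {
		return (packed & 1) != 0;
	}

	// Logical Screen Descriptor packed byte:
	// bit 7 is the global-color-table flag, bits 0-2 encode its size
	// as 2 << N entries.
	static boolean gctFlag(int packed) {
		return (packed & 0x80) != 0;
	}

	static int gctSize(int packed) {
		return 2 << (packed & 7);
	}

	public static void main(String[] args) {
		int gce = 0x09; // disposal = 2 (restore to background), transparent
		System.out.println(disposalMethod(gce)); // 2
		System.out.println(transparency(gce));   // true

		int lsd = 0x87; // GCT present, 256 entries
		System.out.println(gctFlag(lsd)); // true
		System.out.println(gctSize(lsd)); // 256
	}
}
```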

Part 3: the LZWEncoder.java class
package com.accp.bookshop.util;

import java.io.OutputStream;
import java.io.IOException;

//==============================================================================
//  Adapted from Jef Poskanzer's Java port by way of J. M. G. Elliott.
//  K Weiner 12/00

class LZWEncoder {

	private static final int EOF = -1;

	private int imgW, imgH;
	private byte[] pixAry;
	private int initCodeSize;
	private int remaining;
	private int curPixel;

	// GIFCOMPR.C       - GIF Image compression routines
	//
	// Lempel-Ziv compression based on 'compress'.  GIF modifications by
	// David Rowley (mgardi@watdcsu.waterloo.edu)

	// General DEFINEs

	static final int BITS = 12;

	static final int HSIZE = 5003; // 80% occupancy

	// GIF Image compression - modified 'compress'
	//
	// Based on: compress.c - File compression ala IEEE Computer, June 1984.
	//
	// By Authors:  Spencer W. Thomas      (decvax!harpo!utah-cs!utah-gr!thomas)
	//              Jim McKie              (decvax!mcvax!jim)
	//              Steve Davies           (decvax!vax135!petsd!peora!srd)
	//              Ken Turkowski          (decvax!decwrl!turtlevax!ken)
	//              James A. Woods         (decvax!ihnp4!ames!jaw)
	//              Joe Orost              (decvax!vax135!petsd!joe)

	int n_bits; // number of bits/code
	int maxbits = BITS; // user settable max # bits/code
	int maxcode; // maximum code, given n_bits
	int maxmaxcode = 1 << BITS; // should NEVER generate this code

	int[] htab = new int[HSIZE];
	int[] codetab = new int[HSIZE];

	int hsize = HSIZE; // for dynamic table sizing

	int free_ent = 0; // first unused entry

	// block compression parameters -- after all codes are used up,
	// and compression rate changes, start over.
	boolean clear_flg = false;

	// Algorithm:  use open addressing double hashing (no chaining) on the
	// prefix code / next character combination.  We do a variant of Knuth's
	// algorithm D (vol. 3, sec. 6.4) along with G. Knott's relatively-prime
	// secondary probe.  Here, the modular division first probe gives way
	// to a faster exclusive-or manipulation.  Also do block compression with
	// an adaptive reset, whereby the code table is cleared when the compression
	// ratio decreases, but after the table fills.  The variable-length output
	// codes are re-sized at this point, and a special CLEAR code is generated
	// for the decompressor.  Late addition:  construct the table according to
	// file size for noticeable speed improvement on small files.  Please direct
	// questions about this implementation to ames!jaw.

	int g_init_bits;

	int ClearCode;
	int EOFCode;

	// output
	//
	// Output the given code.
	// Inputs:
	//      code:   A n_bits-bit integer.  If == -1, then EOF.  This assumes
	//              that n_bits <= wordsize - 1.
	// Outputs:
	//      Outputs code to the file.
	// Assumptions:
	//      Chars are 8 bits long.
	// Algorithm:
	//      Maintain a BITS character long buffer (so that 8 codes will
	// fit in it exactly).  Use the VAX insv instruction to insert each
	// code in turn.  When the buffer fills up empty it and start over.

	int cur_accum = 0;
	int cur_bits = 0;

	int masks[] =
		{
			0x0000,
			0x0001,
			0x0003,
			0x0007,
			0x000F,
			0x001F,
			0x003F,
			0x007F,
			0x00FF,
			0x01FF,
			0x03FF,
			0x07FF,
			0x0FFF,
			0x1FFF,
			0x3FFF,
			0x7FFF,
			0xFFFF };

	// Number of characters so far in this 'packet'
	int a_count;

	// Define the storage for the packet accumulator
	byte[] accum = new byte[256];

	//----------------------------------------------------------------------------
	LZWEncoder(int width, int height, byte[] pixels, int color_depth) {
		imgW = width;
		imgH = height;
		pixAry = pixels;
		initCodeSize = Math.max(2, color_depth);
	}

	// Add a character to the end of the current packet, and if it is 254
	// characters, flush the packet to disk.
	void char_out(byte c, OutputStream outs) throws IOException {
		accum[a_count++] = c;
		if (a_count >= 254)
			flush_char(outs);
	}

	// Clear out the hash table

	// table clear for block compress
	void cl_block(OutputStream outs) throws IOException {
		cl_hash(hsize);
		free_ent = ClearCode + 2;
		clear_flg = true;

		output(ClearCode, outs);
	}

	// reset code table
	void cl_hash(int hsize) {
		for (int i = 0; i < hsize; ++i)
			htab[i] = -1;
	}

	void compress(int init_bits, OutputStream outs) throws IOException {
		int fcode;
		int i /* = 0 */;
		int c;
		int ent;
		int disp;
		int hsize_reg;
		int hshift;

		// Set up the globals:  g_init_bits - initial number of bits
		g_init_bits = init_bits;

		// Set up the necessary values
		clear_flg = false;
		n_bits = g_init_bits;
		maxcode = MAXCODE(n_bits);

		ClearCode = 1 << (init_bits - 1);
		EOFCode = ClearCode + 1;
		free_ent = ClearCode + 2;

		a_count = 0; // clear packet

		ent = nextPixel();

		hshift = 0;
		for (fcode = hsize; fcode < 65536; fcode *= 2)
			++hshift;
		hshift = 8 - hshift; // set hash code range bound

		hsize_reg = hsize;
		cl_hash(hsize_reg); // clear hash table

		output(ClearCode, outs);

		outer_loop : while ((c = nextPixel()) != EOF) {
			fcode = (c << maxbits) + ent;
			i = (c << hshift) ^ ent; // xor hashing

			if (htab[i] == fcode) {
				ent = codetab[i];
				continue;
			} else if (htab[i] >= 0) // non-empty slot
				{
				disp = hsize_reg - i; // secondary hash (after G. Knott)
				if (i == 0)
					disp = 1;
				do {
					if ((i -= disp) < 0)
						i += hsize_reg;

					if (htab[i] == fcode) {
						ent = codetab[i];
						continue outer_loop;
					}
				} while (htab[i] >= 0);
			}
			output(ent, outs);
			ent = c;
			if (free_ent < maxmaxcode) {
				codetab[i] = free_ent++; // code -> hashtable
				htab[i] = fcode;
			} else
				cl_block(outs);
		}
		// Put out the final code.
		output(ent, outs);
		output(EOFCode, outs);
	}

	//----------------------------------------------------------------------------
	void encode(OutputStream os) throws IOException {
		os.write(initCodeSize); // write "initial code size" byte

		remaining = imgW * imgH; // reset navigation variables
		curPixel = 0;

		compress(initCodeSize + 1, os); // compress and write the pixel data

		os.write(0); // write block terminator
	}

	// Flush the packet to disk, and reset the accumulator
	void flush_char(OutputStream outs) throws IOException {
		if (a_count > 0) {
			outs.write(a_count);
			outs.write(accum, 0, a_count);
			a_count = 0;
		}
	}

	final int MAXCODE(int n_bits) {
		return (1 << n_bits) - 1;
	}

	//----------------------------------------------------------------------------
	// Return the next pixel from the image
	//----------------------------------------------------------------------------
	private int nextPixel() {
		if (remaining == 0)
			return EOF;

		--remaining;

		byte pix = pixAry[curPixel++];

		return pix & 0xff;
	}

	void output(int code, OutputStream outs) throws IOException {
		cur_accum &= masks[cur_bits];

		if (cur_bits > 0)
			cur_accum |= (code << cur_bits);
		else
			cur_accum = code;

		cur_bits += n_bits;

		while (cur_bits >= 8) {
			char_out((byte) (cur_accum & 0xff), outs);
			cur_accum >>= 8;
			cur_bits -= 8;
		}

		// If the next entry is going to be too big for the code size,
		// then increase it, if possible.
		if (free_ent > maxcode || clear_flg) {
			if (clear_flg) {
				maxcode = MAXCODE(n_bits = g_init_bits);
				clear_flg = false;
			} else {
				++n_bits;
				if (n_bits == maxbits)
					maxcode = maxmaxcode;
				else
					maxcode = MAXCODE(n_bits);
			}
		}

		if (code == EOFCode) {
			// At EOF, write the rest of the buffer.
			while (cur_bits > 0) {
				char_out((byte) (cur_accum & 0xff), outs);
				cur_accum >>= 8;
				cur_bits -= 8;
			}

			flush_char(outs);
		}
	}
}
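The output() routine above packs variable-width codes into bytes LSB-first, exactly as the GIF format requires. A stripped-down standalone sketch of the same packing (fixed code width, no code-table management; the class name is invented for illustration) shows the byte layout it produces:

```java
import java.io.ByteArrayOutputStream;

public class LsbBitPacker {
	private int accum = 0; // bit accumulator, like cur_accum
	private int bits = 0;  // bits currently in the accumulator, like cur_bits
	private final ByteArrayOutputStream out = new ByteArrayOutputStream();

	// Append one n-bit code, least-significant bits first.
	void put(int code, int n) {
		accum |= code << bits;
		bits += n;
		while (bits >= 8) { // emit full bytes as they fill up
			out.write(accum & 0xff);
			accum >>= 8;
			bits -= 8;
		}
	}

	// Flush any remaining bits, zero-padded, like the EOFCode branch of output().
	byte[] finish() {
		if (bits > 0) out.write(accum & 0xff);
		return out.toByteArray();
	}

	public static void main(String[] args) {
		LsbBitPacker p = new LsbBitPacker();
		// three 3-bit codes: 0b101, 0b011, 0b110 pack as bits ...110 011 101
		p.put(5, 3);
		p.put(3, 3);
		p.put(6, 3);
		byte[] bytes = p.finish();
		// first byte 0b10011101 = 0x9d, second byte holds the leftover bit
		System.out.printf("%02x %02x%n", bytes[0] & 0xff, bytes[1] & 0xff); // 9d 01
	}
}
```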

Part 4: the NeuQuant.java class
package com.accp.bookshop.util;

/* NeuQuant Neural-Net Quantization Algorithm
 * ------------------------------------------
 *
 * Copyright (c) 1994 Anthony Dekker
 *
 * NEUQUANT Neural-Net quantization algorithm by Anthony Dekker, 1994.
 * See "Kohonen neural networks for optimal colour quantization"
 * in "Network: Computation in Neural Systems" Vol. 5 (1994) pp 351-367.
 * for a discussion of the algorithm.
 *
 * Any party obtaining a copy of these files from the author, directly or
 * indirectly, is granted, free of charge, a full and unrestricted irrevocable,
 * world-wide, paid up, royalty-free, nonexclusive right and license to deal
 * in this software and documentation files (the "Software"), including without
 * limitation the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons who receive
 * copies from any such party to do so, with the only requirement being
 * that this copyright notice remain intact.
 */

// Ported to Java 12/00 K Weiner

public class NeuQuant {

	protected static final int netsize = 256; /* number of colours used */

	/* four primes near 500 - assume no image has a length so large */
	/* that it is divisible by all four primes */
	protected static final int prime1 = 499;
	protected static final int prime2 = 491;
	protected static final int prime3 = 487;
	protected static final int prime4 = 503;

	protected static final int minpicturebytes = (3 * prime4);
	/* minimum size for input image */

	/* Program Skeleton
	   ----------------
	   [select samplefac in range 1..30]
	   [read image from input file]
	   pic = (unsigned char*) malloc(3*width*height);
	   initnet(pic,3*width*height,samplefac);
	   learn();
	   unbiasnet();
	   [write output image header, using writecolourmap(f)]
	   inxbuild();
	   write output image using inxsearch(b,g,r)      */

	/* Network Definitions
	   ------------------- */

	protected static final int maxnetpos = (netsize - 1);
	protected static final int netbiasshift = 4; /* bias for colour values */
	protected static final int ncycles = 100; /* no. of learning cycles */

	/* defs for freq and bias */
	protected static final int intbiasshift = 16; /* bias for fractions */
	protected static final int intbias = (((int) 1) << intbiasshift);
	protected static final int gammashift = 10; /* gamma = 1024 */
	protected static final int gamma = (((int) 1) << gammashift);
	protected static final int betashift = 10;
	protected static final int beta = (intbias >> betashift); /* beta = 1/1024 */
	protected static final int betagamma =
		(intbias << (gammashift - betashift));

	/* defs for decreasing radius factor */
	protected static final int initrad = (netsize >> 3); /* for 256 cols, radius starts */
	protected static final int radiusbiasshift = 6; /* at 32.0 biased by 6 bits */
	protected static final int radiusbias = (((int) 1) << radiusbiasshift);
	protected static final int initradius = (initrad * radiusbias); /* and decreases by a */
	protected static final int radiusdec = 30; /* factor of 1/30 each cycle */

	/* defs for decreasing alpha factor */
	protected static final int alphabiasshift = 10; /* alpha starts at 1.0 */
	protected static final int initalpha = (((int) 1) << alphabiasshift);

	protected int alphadec; /* biased by 10 bits */

	/* radbias and alpharadbias used for radpower calculation */
	protected static final int radbiasshift = 8;
	protected static final int radbias = (((int) 1) << radbiasshift);
	protected static final int alpharadbshift = (alphabiasshift + radbiasshift);
	protected static final int alpharadbias = (((int) 1) << alpharadbshift);

	/* Types and Global Variables
	-------------------------- */

	protected byte[] thepicture; /* the input image itself */
	protected int lengthcount; /* lengthcount = H*W*3 */

	protected int samplefac; /* sampling factor 1..30 */

	//   typedef int pixel[4];                /* BGRc */
	protected int[][] network; /* the network itself - [netsize][4] */

	protected int[] netindex = new int[256];
	/* for network lookup - really 256 */

	protected int[] bias = new int[netsize];
	/* bias and freq arrays for learning */
	protected int[] freq = new int[netsize];
	protected int[] radpower = new int[initrad];
	/* radpower for precomputation */

	/* Initialise network in range (0,0,0) to (255,255,255) and set parameters
	   ----------------------------------------------------------------------- */
	public NeuQuant(byte[] thepic, int len, int sample) {

		int i;
		int[] p;

		thepicture = thepic;
		lengthcount = len;
		samplefac = sample;

		network = new int[netsize][];
		for (i = 0; i < netsize; i++) {
			network[i] = new int[4];
			p = network[i];
			p[0] = p[1] = p[2] = (i << (netbiasshift + 8)) / netsize;
			freq[i] = intbias / netsize; /* 1/netsize */
			bias[i] = 0;
		}
	}

	public byte[] colorMap() {
		byte[] map = new byte[3 * netsize];
		int[] index = new int[netsize];
		for (int i = 0; i < netsize; i++)
			index[network[i][3]] = i;
		int k = 0;
		for (int i = 0; i < netsize; i++) {
			int j = index[i];
			map[k++] = (byte) (network[j][0]);
			map[k++] = (byte) (network[j][1]);
			map[k++] = (byte) (network[j][2]);
		}
		return map;
	}

	/* Insertion sort of network and building of netindex[0..255] (to do after unbias)
	   ------------------------------------------------------------------------------- */
	public void inxbuild() {

		int i, j, smallpos, smallval;
		int[] p;
		int[] q;
		int previouscol, startpos;

		previouscol = 0;
		startpos = 0;
		for (i = 0; i < netsize; i++) {
			p = network[i];
			smallpos = i;
			smallval = p[1]; /* index on g */
			/* find smallest in i..netsize-1 */
			for (j = i + 1; j < netsize; j++) {
				q = network[j];
				if (q[1] < smallval) { /* index on g */
					smallpos = j;
					smallval = q[1]; /* index on g */
				}
			}
			q = network[smallpos];
			/* swap p (i) and q (smallpos) entries */
			if (i != smallpos) {
				j = q[0];
				q[0] = p[0];
				p[0] = j;
				j = q[1];
				q[1] = p[1];
				p[1] = j;
				j = q[2];
				q[2] = p[2];
				p[2] = j;
				j = q[3];
				q[3] = p[3];
				p[3] = j;
			}
			/* smallval entry is now in position i */
			if (smallval != previouscol) {
				netindex[previouscol] = (startpos + i) >> 1;
				for (j = previouscol + 1; j < smallval; j++)
					netindex[j] = i;
				previouscol = smallval;
				startpos = i;
			}
		}
		netindex[previouscol] = (startpos + maxnetpos) >> 1;
		for (j = previouscol + 1; j < 256; j++)
			netindex[j] = maxnetpos; /* really 256 */
	}

	/* Main Learning Loop
	   ------------------ */
	public void learn() {

		int i, j, b, g, r;
		int radius, rad, alpha, step, delta, samplepixels;
		byte[] p;
		int pix, lim;

		if (lengthcount < minpicturebytes)
			samplefac = 1;
		alphadec = 30 + ((samplefac - 1) / 3);
		p = thepicture;
		pix = 0;
		lim = lengthcount;
		samplepixels = lengthcount / (3 * samplefac);
		delta = samplepixels / ncycles;
		alpha = initalpha;
		radius = initradius;

		rad = radius >> radiusbiasshift;
		if (rad <= 1)
			rad = 0;
		for (i = 0; i < rad; i++)
			radpower[i] =
				alpha * (((rad * rad - i * i) * radbias) / (rad * rad));

		//fprintf(stderr,"beginning 1D learning: initial radius=%d/n", rad);

		if (lengthcount < minpicturebytes)
			step = 3;
		else if ((lengthcount % prime1) != 0)
			step = 3 * prime1;
		else {
			if ((lengthcount % prime2) != 0)
				step = 3 * prime2;
			else {
				if ((lengthcount % prime3) != 0)
					step = 3 * prime3;
				else
					step = 3 * prime4;
			}
		}

		i = 0;
		while (i < samplepixels) {
			b = (p[pix + 0] & 0xff) << netbiasshift;
			g = (p[pix + 1] & 0xff) << netbiasshift;
			r = (p[pix + 2] & 0xff) << netbiasshift;
			j = contest(b, g, r);

			altersingle(alpha, j, b, g, r);
			if (rad != 0)
				alterneigh(rad, j, b, g, r); /* alter neighbours */

			pix += step;
			if (pix >= lim)
				pix -= lengthcount;

			i++;
			if (delta == 0)
				delta = 1;
			if (i % delta == 0) {
				alpha -= alpha / alphadec;
				radius -= radius / radiusdec;
				rad = radius >> radiusbiasshift;
				if (rad <= 1)
					rad = 0;
				for (j = 0; j < rad; j++)
					radpower[j] =
						alpha * (((rad * rad - j * j) * radbias) / (rad * rad));
			}
		}
		//fprintf(stderr,"finished 1D learning: final alpha=%f !/n",((float)alpha)/initalpha);
	}

	/* Search for BGR values 0..255 (after net is unbiased) and return colour index
	   ---------------------------------------------------------------------------- */
	public int map(int b, int g, int r) {

		int i, j, dist, a, bestd;
		int[] p;
		int best;

		bestd = 1000; /* biggest possible dist is 256*3 */
		best = -1;
		i = netindex[g]; /* index on g */
		j = i - 1; /* start at netindex[g] and work outwards */

		while ((i < netsize) || (j >= 0)) {
			if (i < netsize) {
				p = network[i];
				dist = p[1] - g; /* inx key */
				if (dist >= bestd)
					i = netsize; /* stop iter */
				else {
					i++;
					if (dist < 0)
						dist = -dist;
					a = p[0] - b;
					if (a < 0)
						a = -a;
					dist += a;
					if (dist < bestd) {
						a = p[2] - r;
						if (a < 0)
							a = -a;
						dist += a;
						if (dist < bestd) {
							bestd = dist;
							best = p[3];
						}
					}
				}
			}
			if (j >= 0) {
				p = network[j];
				dist = g - p[1]; /* inx key - reverse dif */
				if (dist >= bestd)
					j = -1; /* stop iter */
				else {
					j--;
					if (dist < 0)
						dist = -dist;
					a = p[0] - b;
					if (a < 0)
						a = -a;
					dist += a;
					if (dist < bestd) {
						a = p[2] - r;
						if (a < 0)
							a = -a;
						dist += a;
						if (dist < bestd) {
							bestd = dist;
							best = p[3];
						}
					}
				}
			}
		}
		return (best);
	}
	public byte[] process() {
		learn();
		unbiasnet();
		inxbuild();
		return colorMap();
	}

	/* Unbias network to give byte values 0..255 and record position i to prepare for sort
	   ----------------------------------------------------------------------------------- */
	public void unbiasnet() {

		int i, j;

		for (i = 0; i < netsize; i++) {
			network[i][0] >>= netbiasshift;
			network[i][1] >>= netbiasshift;
			network[i][2] >>= netbiasshift;
			network[i][3] = i; /* record colour no */
		}
	}

	/* Move adjacent neurons by precomputed alpha*(1-((i-j)^2/[r]^2)) in radpower[|i-j|]
	   --------------------------------------------------------------------------------- */
	protected void alterneigh(int rad, int i, int b, int g, int r) {

		int j, k, lo, hi, a, m;
		int[] p;

		lo = i - rad;
		if (lo < -1)
			lo = -1;
		hi = i + rad;
		if (hi > netsize)
			hi = netsize;

		j = i + 1;
		k = i - 1;
		m = 1;
		while ((j < hi) || (k > lo)) {
			a = radpower[m++];
			if (j < hi) {
				p = network[j++];
				try {
					p[0] -= (a * (p[0] - b)) / alpharadbias;
					p[1] -= (a * (p[1] - g)) / alpharadbias;
					p[2] -= (a * (p[2] - r)) / alpharadbias;
				} catch (Exception e) {
				} // prevents 1.3 miscompilation
			}
			if (k > lo) {
				p = network[k--];
				try {
					p[0] -= (a * (p[0] - b)) / alpharadbias;
					p[1] -= (a * (p[1] - g)) / alpharadbias;
					p[2] -= (a * (p[2] - r)) / alpharadbias;
				} catch (Exception e) {
				}
			}
		}
	}

	/* Move neuron i towards biased (b,g,r) by factor alpha
	   ---------------------------------------------------- */
	protected void altersingle(int alpha, int i, int b, int g, int r) {

		/* alter hit neuron */
		int[] n = network[i];
		n[0] -= (alpha * (n[0] - b)) / initalpha;
		n[1] -= (alpha * (n[1] - g)) / initalpha;
		n[2] -= (alpha * (n[2] - r)) / initalpha;
	}

	/* Search for biased BGR values
	   ---------------------------- */
	protected int contest(int b, int g, int r) {

		/* finds closest neuron (min dist) and updates freq */
		/* finds best neuron (min dist-bias) and returns position */
		/* for frequently chosen neurons, freq[i] is high and bias[i] is negative */
		/* bias[i] = gamma*((1/netsize)-freq[i]) */

		int i, dist, a, biasdist, betafreq;
		int bestpos, bestbiaspos, bestd, bestbiasd;
		int[] n;

		bestd = ~(((int) 1) << 31);
		bestbiasd = bestd;
		bestpos = -1;
		bestbiaspos = bestpos;

		for (i = 0; i < netsize; i++) {
			n = network[i];
			dist = n[0] - b;
			if (dist < 0)
				dist = -dist;
			a = n[1] - g;
			if (a < 0)
				a = -a;
			dist += a;
			a = n[2] - r;
			if (a < 0)
				a = -a;
			dist += a;
			if (dist < bestd) {
				bestd = dist;
				bestpos = i;
			}
			biasdist = dist - ((bias[i]) >> (intbiasshift - netbiasshift));
			if (biasdist < bestbiasd) {
				bestbiasd = biasdist;
				bestbiaspos = i;
			}
			betafreq = (freq[i] >> betashift);
			freq[i] -= betafreq;
			bias[i] += (betafreq << gammashift);
		}
		freq[bestpos] += beta;
		bias[bestpos] -= betagamma;
		return (bestbiaspos);
	}
}
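The altersingle update above moves a neuron a fraction alpha/initalpha of the way toward the target colour. A standalone sketch makes that concrete; the update rule is copied from altersingle, while the neuron values and targets are invented for illustration:

```java
public class AlterSingleDemo {
	static final int initalpha = 1 << 10; // alpha == initalpha means "move all the way"

	// Same update rule as NeuQuant.altersingle: n[c] -= alpha*(n[c]-target)/initalpha
	static void altersingle(int alpha, int[] n, int b, int g, int r) {
		n[0] -= (alpha * (n[0] - b)) / initalpha;
		n[1] -= (alpha * (n[1] - g)) / initalpha;
		n[2] -= (alpha * (n[2] - r)) / initalpha;
	}

	public static void main(String[] args) {
		int[] neuron = {100, 50, 200};
		altersingle(initalpha / 2, neuron, 0, 0, 0); // half alpha: move halfway
		System.out.println(neuron[0] + " " + neuron[1] + " " + neuron[2]); // 50 25 100
		altersingle(initalpha, neuron, 0, 0, 0); // full alpha: jump to the target
		System.out.println(neuron[0] + " " + neuron[1] + " " + neuron[2]); // 0 0 0
	}
}
```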
JS logical operators
var a = 2;

var b = 3;

var andflag = a && b ;

var orflag = a || b;

Question: what are the values of andflag and orflag?

At first I assumed both would be true; after all, && and || are "Boolean" operators. It turns out I was wrong.

The answer is andflag = 3 and orflag = 2.

Here is why:

JavaScript coerces each operand of && and || to a Boolean only to decide which operand to return; the result of the expression is one of the operands themselves, never a coerced true or false. For a && b, if a is falsy the result is a, otherwise it is b. For a || b, if a is truthy the result is a, otherwise it is b. The coercion rules are:

objects, non-zero numbers, and non-empty strings are truthy;

everything else (0, NaN, the empty string, null, undefined, false) is falsy.

So for a && b: both a and b are non-zero numbers and therefore truthy, so && evaluates through to the last operand and returns b, which is 3. For a || b: a is already truthy, so || stops there and returns the first operand, which is 2.

 

Both operators also short-circuit:

var a = "" || null || 3 || 4  —->  the operands coerce to false, false, true, true; || returns the first truthy operand, which is 3.

var b = 4 && 5 && null && 0  ——>  && returns the first falsy operand, which is null.

File-upload packets in the HTTP protocol http http://www.cnblogs.com/linjiqin/archive/2011/11/09/2242579.html
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.InetAddress;
import java.net.Socket;
import java.net.URL;
import java.util.Map;

/**
 * Uploads files to a server
 * 
 * @author Administrator
 *
 */
public class SocketHttpRequester {
    /**
     * Submits data to the server directly over HTTP, equivalent to posting this form:
     *   <FORM METHOD=POST ACTION="http://192.168.1.101:8083/upload/servlet/UploadServlet" enctype="multipart/form-data">
            <INPUT TYPE="text" NAME="name">
            <INPUT TYPE="text" NAME="id">
            <input type="file" name="imagefile"/>
            <input type="file" name="zip"/>
         </FORM>
     * @param path upload URL (note: avoid testing against localhost or 127.0.0.1, since on a phone emulator those point back at the emulator itself; use a URL such as http://www.iteye.cn or http://192.168.1.101:8083 instead)
     * @param params request parameters; key is the parameter name, value is the parameter value
     * @param files files to upload
     */
    public static boolean post(String path, Map<String, String> params, FormFile[] files) throws Exception{     
        final String BOUNDARY = "---------------------------7da2137580612"; // multipart boundary line
        final String endline = "--" + BOUNDARY + "--\r\n";// closing boundary marking the end of the data
        
        int fileDataLength = 0;
        for(FormFile uploadFile : files){// compute the total length of the file parts
            StringBuilder fileExplain = new StringBuilder();
             fileExplain.append("--");
             fileExplain.append(BOUNDARY);
             fileExplain.append("\r\n");
             fileExplain.append("Content-Disposition: form-data;name=\""+ uploadFile.getParameterName()+"\";filename=\""+ uploadFile.getFilname() + "\"\r\n");
             fileExplain.append("Content-Type: "+ uploadFile.getContentType()+"\r\n\r\n");
             fileExplain.append("\r\n");
             fileDataLength += fileExplain.length();
            if(uploadFile.getInStream()!=null){
                fileDataLength += uploadFile.getFile().length();
             }else{
                 fileDataLength += uploadFile.getData().length;
             }
        }
        StringBuilder textEntity = new StringBuilder();
        for (Map.Entry<String, String> entry : params.entrySet()) {// build the entity data for the text parameters
            textEntity.append("--");
            textEntity.append(BOUNDARY);
            textEntity.append("\r\n");
            textEntity.append("Content-Disposition: form-data; name=\""+ entry.getKey() + "\"\r\n\r\n");
            textEntity.append(entry.getValue());
            textEntity.append("\r\n");
        }
        // compute the total length of the entity body sent to the server
        int dataLength = textEntity.toString().getBytes().length + fileDataLength +  endline.getBytes().length;
        
        URL url = new URL(path);
        int port = url.getPort()==-1 ? 80 : url.getPort();
        Socket socket = new Socket(InetAddress.getByName(url.getHost()), port);           
        OutputStream outStream = socket.getOutputStream();
        // send the HTTP request headers
        String requestmethod = "POST "+ url.getPath()+" HTTP/1.1\r\n";
        outStream.write(requestmethod.getBytes());
        String accept = "Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave-flash, application/xaml+xml, application/vnd.ms-xpsdocument, application/x-ms-xbap, application/x-ms-application, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */*\r\n";
        outStream.write(accept.getBytes());
        String language = "Accept-Language: zh-CN\r\n";
        outStream.write(language.getBytes());
        String contenttype = "Content-Type: multipart/form-data; boundary="+ BOUNDARY+ "\r\n";
        outStream.write(contenttype.getBytes());
        String contentlength = "Content-Length: "+ dataLength + "\r\n";
        outStream.write(contentlength.getBytes());
        String alive = "Connection: Keep-Alive\r\n";
        outStream.write(alive.getBytes());
        String host = "Host: "+ url.getHost() +":"+ port +"\r\n";
        outStream.write(host.getBytes());
        // per the HTTP protocol, the headers end with an extra CRLF
        outStream.write("\r\n".getBytes());
        // send all text-parameter entity data
        outStream.write(textEntity.toString().getBytes());
        // send all file entity data
        for(FormFile uploadFile : files){
            StringBuilder fileEntity = new StringBuilder();
             fileEntity.append("--");
             fileEntity.append(BOUNDARY);
             fileEntity.append("\r\n");
             fileEntity.append("Content-Disposition: form-data;name=\""+ uploadFile.getParameterName()+"\";filename=\""+ uploadFile.getFilname() + "\"\r\n");
             fileEntity.append("Content-Type: "+ uploadFile.getContentType()+"\r\n\r\n");
             outStream.write(fileEntity.toString().getBytes());
             if(uploadFile.getInStream()!=null){
                 byte[] buffer = new byte[1024];
                 int len = 0;
                 while((len = uploadFile.getInStream().read(buffer, 0, 1024))!=-1){
                     outStream.write(buffer, 0, len);
                 }
                 uploadFile.getInStream().close();
             }else{
                 outStream.write(uploadFile.getData(), 0, uploadFile.getData().length);
             }
             outStream.write("\r\n".getBytes());
        }
        // send the closing boundary to mark the end of the data
        outStream.write(endline.getBytes());
        
        outStream.flush();// make sure the whole request body is on the wire before reading the response
        BufferedReader reader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        String statusLine = reader.readLine();// read the status line; anything other than 200 means the request failed
        outStream.close();
        reader.close();
        socket.close();
        return statusLine != null && statusLine.indexOf("200") != -1;
    }
    
    /**
     * Submits data to the server
     * @param path upload URL (note: avoid testing against localhost or 127.0.0.1, since on a phone emulator those point back at the emulator itself; use a URL such as http://www.itcast.cn or http://192.168.1.10:8080 instead)
     * @param params request parameters; key is the parameter name, value is the parameter value
     * @param file file to upload
     */
    public static boolean post(String path, Map<String, String> params, FormFile file) throws Exception{
       return post(path, params, new FormFile[]{file});
    }
}
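The multipart layout written byte-by-byte above can be sketched standalone. This builds the text-parameter section exactly as the textEntity loop does; the class name, boundary string, and field values here are invented for illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MultipartTextEntity {
	// Build the text-parameter section of a multipart/form-data body,
	// mirroring the textEntity loop in SocketHttpRequester.post().
	static String build(String boundary, Map<String, String> params) {
		StringBuilder sb = new StringBuilder();
		for (Map.Entry<String, String> e : params.entrySet()) {
			sb.append("--").append(boundary).append("\r\n");
			sb.append("Content-Disposition: form-data; name=\"")
			  .append(e.getKey()).append("\"\r\n\r\n");
			sb.append(e.getValue()).append("\r\n");
		}
		return sb.toString();
	}

	public static void main(String[] args) {
		Map<String, String> params = new LinkedHashMap<>();
		params.put("name", "gif-book");
		params.put("id", "42");
		// each parameter becomes: boundary line, disposition header, blank line, value
		System.out.print(build("----demo-boundary", params));
	}
}
```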
Sencha Touch beginner learning resources roundup, 2014 sencha http://www.cnblogs.com/mlzs/p/3468893.html
Sencha Touch has now been updated to version 2.3.1a, so here is a refreshed set of resources.

Official site: http://www.sencha.com/products/touch/

Online docs: http://docs.sencha.com/touch/2.3.1/

Chinese translated docs: http://touch.scsn.gov.cn/

Official forum: http://www.sencha.com/forum/

Official SDK download page: http://www.sencha.com/products/touch/download/

Official Sencha Cmd download page: http://www.sencha.com/products/sencha-cmd/download

Official offline docs download page: http://docs.sencha.com/

Official live demos: http://try.sencha.com/

Official app showcase: http://www.sencha.com/apps/

Official extensions market: https://market.sencha.com/extensions

 

Recommended must-read article on JSON and JSONP:

http://www.cnblogs.com/dowinning/archive/2012/04/19/json-jsonp-jquery.html

A recommended front-end blog; the author writes in a lively, entertaining style:

http://www.zhangxinxu.com/wordpress/

 

For Sencha Cmd usage, see the official API docs, or the articles on these blogs:

Beginner series by 香辣牛肉面:

http://www.cnblogs.com/cjpx00008/category/533356.html

Hands-on Sencha Touch (ST) beginner tutorial series:

http://blog.csdn.net/helem_2013/article/category/1789693

Series on ST component usage and layouts in depth:

http://www.cnblogs.com/html5mob/

ST beginner translation series:

http://blog.csdn.net/yax405/article/category/1534531

Study-notes series by 互联网fans:

http://www.cnblogs.com/qqloving/tag/sencha%20touch/

ST beginner documentation download page:

http://www.kuaipan.cn/file/id_2260729050803414.htm

ST theory series:

http://www.cnblogs.com/dowinning/category/352873.html

In-depth extension series by 阿赛:

http://sailei1.iteye.com/

zijun's blog:

http://www.cnblogs.com/zijun/

avengang's blog, with some promising-looking extensions:

http://my.oschina.net/u/259577/blog?catalog=454564

My series on building and packaging ST with Cordova:

http://www.cnblogs.com/mlzs/tag/sencha%20touch%E4%B8%8Ecordova/

My ST tips series:

http://www.cnblogs.com/mlzs/tag/sencha%20touch/

Other ST-related roundups:

http://www.cnblogs.com/guyoung/archive/2012/05/20/5008-.html

My beginner demo (2.0):

http://files.cnblogs.com/mlzs/www.rar

My advanced demo (2.2):

http://www.cnblogs.com/mlzs/p/3382229.html
How Spring+Hibernate generates database tables s2sh
            private boolean createTable(Map map) {// map holds the relative paths of the hbm.xml files
		logger.info("create table start...");
		Collection collection = map.values();
		if (collection != null && collection.size() > 0) {
			DataSource datasource = (DataSource) AppUtil.getBean("dataSource");
			LocalSessionFactoryBean localsessionfactorybean = new LocalSessionFactoryBean();
			localsessionfactorybean.setDataSource(datasource);
			Properties properties = new Properties();
			properties.setProperty("connection.useUnicode", "true");
			properties.setProperty("connection.characterEncoding", "utf-8");
			properties.setProperty("hibernate.dialect", hibernateDialect);
			properties.setProperty("hibernate.show_sql", "true");
			properties.setProperty("hibernate.jdbc.batch_size", "20");
			properties.setProperty("hibernate.jdbc.fetch_size", "20");
			properties.setProperty("hibernate.cache.provider_class", "org.hibernate.cache.EhCacheProvider");
			properties.setProperty("hibernate.cache.use_second_level_cache", "true");
			properties.setProperty("hibernate.hbm2ddl.auto", "update");
			localsessionfactorybean.setHibernateProperties(properties);
			FileSystemResource afilesystemresource[] = new FileSystemResource[collection.size()];
			int i = 0;
			for (Iterator iterator = collection.iterator(); iterator.hasNext();) {
				String s = (String) iterator.next();
				afilesystemresource[i++] = new FileSystemResource(s);
			}

			localsessionfactorybean.setMappingLocations(afilesystemresource);
			try {
				localsessionfactorybean.afterPropertiesSet();
				return true;
			} catch (Exception exception) {
				logger.info((new StringBuilder()).append("\tError updating xml file: ").append(exception.getMessage()).toString());
			}
			deleteDynaModelMap(map);// clear the records
			return false;
		} else {
			logger.info("\tNo xml files to update!");
			return false;
		}
	}
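The snippet hinges on the hibernate.hbm2ddl.auto=update property: when afterPropertiesSet() builds the session factory, Hibernate compares the mapped entities against the schema and issues DDL for any missing tables or columns. The commonly documented values of this setting, for reference:

```properties
hibernate.hbm2ddl.auto=validate    # validate the schema, change nothing
hibernate.hbm2ddl.auto=update      # add missing tables/columns (used above)
hibernate.hbm2ddl.auto=create      # drop and recreate the schema at startup
hibernate.hbm2ddl.auto=create-drop # create at startup, drop at shutdown
```

Note that update never drops or narrows existing columns, which is why the method can be run repeatedly against a live database.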