git memo

Search the whole history, including past commits:
git grep <pattern> $(git rev-list --all)
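For instance, to hunt for a string that no longer exists in HEAD (the pattern here is just an illustration):
# search every reachable commit for the string "FIXME", printing file and line
git grep -n "FIXME" $(git rev-list --all)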

docker 0.7.2


* docker 0.7.2 has been released
* https://github.com/dotcloud/docker/blob/master/CHANGELOG.md
* The following changes went in, so I chased them down
* Drop capabilities from within dockerinit
* https://github.com/dotcloud/docker/pull/3015
* Set hostname and IP address from within dockerinit
* https://github.com/dotcloud/docker/pull/3201
* The Linux capability drop that used to be done via config.lxc is now handled in dockerinit: the capability settings are gone from the lxc template, and the dockerinit command drops the capabilities itself.
* Along with that, network and hostname configuration also moved into dockerinit; the values are now passed to dockerinit as command-line arguments.
* container.go

    // Networking
    if !container.Config.NetworkDisabled {
        network := container.NetworkSettings
        params = append(params,
            "-g", network.Gateway,
            "-i", fmt.Sprintf("%s/%d", network.IPAddress, network.IPPrefixLen),
        )
    }
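* Presumably the in-container init then gets invoked with an argument list along these lines (a made-up illustration: only the -g and -i flags come from the excerpt above; the binary path and the addresses are assumptions):
.dockerinit -g 172.17.42.1 -i 172.17.0.2/16 <command>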

On the number of volumes that can be attached to an OpenStack KVM instance


* I checked how many volumes can be attached to a KVM instance on the OpenStack deployment under evaluation; it topped out at 27, /dev/vda through /dev/vdaa
* Attaching beyond that with nova volume-attach returns no error, but the volume never actually appears in the instance, and nova-compute.log reports libvirtError: internal error No more available PCI addresses
* lspci inside such an instance looks like this:
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Cirrus Logic GD 5446
00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
00:04.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:05.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:06.0 RAM memory: Red Hat, Inc Virtio memory balloon
00:07.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:08.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:09.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:0a.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:0b.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:0c.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:0d.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:0e.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:0f.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:10.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:11.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:12.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:13.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:14.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:15.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:16.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:17.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:18.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:19.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:1a.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:1b.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:1c.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:1d.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:1e.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:1f.0 SCSI storage controller: Red Hat, Inc Virtio block device

* PCI device (slot) numbers only run from 0x00 to 0x1f, so that's the wall we hit: of the 32 slots, 5 are taken by the host bridge, the PIIX slot, VGA, the NIC and the balloon, leaving 27 for virtio block devices
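* Quick arithmetic against the lspci output above:
echo $(( 0x1f + 1 - 5 ))   # 32 slots minus the 5 non-volume devices = 27, matching /dev/vda .. /dev/vdaa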
* While we're at it, a look at PCI handling in Qemu
* hw/pci/pci.c
* A PCI device address is written as [[<domain>:]<bus>:]<slot>[.<func>]
/*
 * Parse [[<domain>:]<bus>:]<slot>, return -1 on error if funcp == NULL
 *       [[<domain>:]<bus>:]<slot>.<func>, return -1 on error
 */
int pci_parse_devaddr(const char *addr, int *domp, int *busp,
                      unsigned int *slotp, unsigned int *funcp)
{

* Domains other than 0 are rejected, in PCIBus *pci_get_bus_devfn(int *devfnp, PCIBus *root, const char *devaddr):
    if (dom != 0) {
        fprintf(stderr, "No support for non-zero PCI domains\n");
        return NULL;
    }

* Limits on the bus, slot and func values:
    if (dom > 0xffff || bus > 0xff || slot > 0x1f || func > 7)
        return -1;

* Now, the question: can Qemu be given additional PCI buses?

docker 0.7


* docker 0.7, which makes lxc handy to use, is out.
* Starting with 0.7 it works with storage backends other than aufs.
* The storage drivers are prioritized in the order aufs, devicemapper, vfs:
* graphdriver/driver.go
var (
    DefaultDriver string
    // All registred drivers
    drivers map[string]InitFunc
    // Slice of drivers that should be used in an order
    priority = []string{
        "aufs",
        "devicemapper",
        "vfs",
    }
)

* The storage driver can be switched by setting the DOCKER_DRIVER environment variable:
export DOCKER_DRIVER=vfs
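* To check which driver the daemon actually picked (assuming docker info in 0.7 prints the active driver):
export DOCKER_DRIVER=vfs
docker -d &                    # restart the daemon with the override in place
docker info | grep -i driver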

* The sizes used with devicemapper are defined as follows
* graphdriver/devmapper/deviceset.go
var (
    DefaultDataLoopbackSize     int64  = 100 * 1024 * 1024 * 1024
    DefaultMetaDataLoopbackSize int64  = 2 * 1024 * 1024 * 1024
    DefaultBaseFsSize           uint64 = 10 * 1024 * 1024 * 1024
)
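* That works out to a 100 GB sparse data file, 2 GB of metadata, and a 10 GB base filesystem. The backing loopback files should be visible on disk (the path is an assumption based on the default /var/lib/docker root):
ls -lsh /var/lib/docker/devicemapper/devicemapper/data \
        /var/lib/docker/devicemapper/devicemapper/metadata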

* The thin pool devicemapper uses is created like this:
// This is the programmatic example of "dmsetup create"
func createPool(poolName string, dataFile, metadataFile *osFile) error {
    task, err := createTask(DeviceCreate, poolName)
    if task == nil {
        return err
    }

    size, err := GetBlockDeviceSize(dataFile)
    if err != nil {
        return fmt.Errorf("Can't get data size")
    }

    params := metadataFile.Name() + " " + dataFile.Name() + " 128 32768"
    if err := task.AddTarget(0, size/512, "thin-pool", params); err != nil {
        return fmt.Errorf("Can't add target")
    }

    var cookie uint = 0
    if err := task.SetCookie(&cookie, 0); err != nil {
        return fmt.Errorf("Can't set cookie")
    }

    if err := task.Run(); err != nil {
        return fmt.Errorf("Error running DeviceCreate (createPool)")
    }

    UdevWait(cookie)

    return nil
}

* data_block_size: 128 (in 512-byte sectors, i.e. 64 KiB per block)
* low_water_mark: 32768 (in blocks of data_block_size)

* According to thin-provisioning.txt:
Using an existing pool device
-----------------------------

dmsetup create pool \
--table "0 20971520 thin-pool $metadata_dev $data_dev \
$data_block_size $low_water_mark"

$data_block_size gives the smallest unit of disk space that can be
allocated at a time expressed in units of 512-byte sectors. People
primarily interested in thin provisioning may want to use a value such
as 1024 (512KB). People doing lots of snapshotting may want a smaller value
such as 128 (64KB). If you are not zeroing newly-allocated data,
a larger $data_block_size in the region of 256000 (128MB) is suggested.
$data_block_size must be the same for the lifetime of the
metadata device.

$low_water_mark is expressed in blocks of size $data_block_size. If
free space on the data device drops below this level then a dm event
will be triggered which a userspace daemon should catch allowing it to
extend the pool device. Only one such event will be sent.
Resuming a device with a new table itself triggers an event so the
userspace daemon can use this to detect a situation where a new table
already exceeds the threshold.
So with the 100 GB thin pool configured above, once free space drops below low_water_mark × data_block_size = 32768 × (128 × 512) bytes = 2 GiB, a dm event fires that would let a userspace daemon extend the pool (behavior unverified; see the quick check below).
* These values are hardcoded, so there doesn't seem to be a way to change them
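* Double-checking that threshold with the numbers from createPool above:
# data_block_size = 128 sectors * 512 bytes = 64 KiB per block
# low_water_mark  = 32768 such blocks
echo $(( 32768 * 128 * 512 ))   # 2147483648 bytes = 2 GiB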

openstack memo


* Setting the quota driver for quantum (neutron):
# default driver to use for quota checks
# quota_driver = quantum.quota.ConfDriver
quota_driver = quantum.db.quota_db.DbQuotaDriver

* Patches we apply
* A patch fixing a bug that appears when using the IET driver
* https://review.openstack.org/#/c/27938/2/cinder/volume/iscsi.py
* https://bugs.launchpad.net/nova/+bug/1200249

Notes on getting grub2's grub-mkconfig to write label-based configuration


* Updated 2013/08/18, since the behavior changed
* /etc/grub.d/10_linux
--- 10_linux.backup 2013-06-21 02:20:47.436531219 +0900
+++ 10_linux 2013-06-22 18:53:47.000000000 +0900
@@ -136,8 +136,6 @@
fi

message="$(gettext_printf "Loading Linux %s ..." "${version}")"
+ auto_label="`e2label ${GRUB_DEVICE} 2>/dev/null`"
+ linux_root_device_thisversion="LABEL=${auto_label}"
sed "s/^/$submenu_indentation/" << EOF echo '$message' linux ${rel_dirname}/${basename} root=${linux_root_device_thisversion} ro ${args}

* /usr/share/grub/grub-mkconfig_lib
--- grub-mkconfig_lib.backup 2013-06-21 02:35:50.038459949 +0900
+++ grub-mkconfig_lib 2013-06-22 18:53:46.000000000 +0900
@@ -146,22 +146,20 @@
done
fi

- # If there's a filesystem UUID that GRUB is capable of identifying, use it;
- # otherwise set root as per value in device.map.
- fs_hint="`"${grub_probe}" --device "${device}" --target=compatibility_hint`"
- if [ "x$fs_hint" != x ]; then
- echo "set root='$fs_hint'"
- fi
- if fs_uuid="`"${grub_probe}" --device "${device}" --target=fs_uuid 2> /dev/null`" ; then
- hints="`"${grub_probe}" --device "${device}" --target=hints_string 2> /dev/null`" || hints=
- echo "if [ x\$feature_platform_search_hint = xy ]; then"
- echo " search --no-floppy --fs-uuid --set=root ${hints} ${fs_uuid}"
- echo "else"
- echo " search --no-floppy --fs-uuid --set=root ${fs_uuid}"
- echo "fi"
- fi
+# # If there's a filesystem UUID that GRUB is capable of identifying, use it;
+# # otherwise set root as per value in device.map.
+# fs_hint="`"${grub_probe}" --device "${device}" --target=compatibility_hint`"
+# if [ "x$fs_hint" != x ]; then
+# echo "set root='$fs_hint'"
+# fi
+# if fs_uuid="`"${grub_probe}" --device "${device}" --target=fs_uuid 2> /dev/null`" ; then
+# hints="`"${grub_probe}" --device "${device}" --target=hints_string 2> /dev/null`" || hints=
+# echo "if [ x\$feature_platform_search_hint = xy ]; then"
+# echo " search --no-floppy --fs-uuid --set=root ${hints} ${fs_uuid}"
+# echo "else"
+# echo " search --no-floppy --fs-uuid --set=root ${fs_uuid}"
+# echo "fi"
+# fi
+auto_label="`e2label "$@" 2>/dev/null`"
+echo "search --no-floppy --label ${auto_label} --set root"
}

grub_get_device_id ()
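After applying both patches, regenerating the config should emit label-based root handling. A quick check (paths assume a stock grub2 layout):
grub-mkconfig -o /boot/grub/grub.cfg
grep -e 'root=LABEL=' -e 'search --no-floppy --label' /boot/grub/grub.cfg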

Universal Media Server Build memo


* git clone https://code.google.com/p/universal-media-server/
* git clone https://github.com/UniversalMediaServer/UniversalMediaServer.git
* Adjust the sendAlive delay interval in public class UPNPHelper (src/main/java/net/pms/network/UPNPHelper.java) to taste

mvn com.savage7.maven.plugins:maven-external-dependency-plugin:resolve-external
mvn com.savage7.maven.plugins:maven-external-dependency-plugin:install-external
mvn clean package

yum-plugin-priorities

A note, because I forget this every time I touch CentOS.

yum-plugin-priorities gives repos a default priority of 99, so just putting priority=100 in the repo config for epel and the like is enough to lower their precedence (a larger number means lower priority).
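For example, in /etc/yum.repos.d/epel.repo (an illustrative excerpt; only the priority line is the point):
[epel]
name=Extra Packages for Enterprise Linux - $basearch
priority=100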

network namespace

Playing around with the network namespace feature:

ip netns add test01                       # create a namespace named test01
ip netns list
ip netns exec test01 ifconfig -a          # only lo exists inside, and it's down
ip link add name vethint type veth peer name veth-ext
ip link set veth-ext netns test01         # move one end of the veth pair into test01
ip netns exec test01 ifconfig -a          # veth-ext now shows up inside
ip netns exec test01 ip link set veth-ext up
ip netns exec test01 ip addr add 192.168.3.11/24 dev veth-ext
ip link set dev vethint up
ip addr add 192.168.3.12/24 dev vethint
ip route add 192.168.3.0/24 dev vethint
ping 192.168.3.11                         # reach into the namespace from the host
ip netns exec test01 /usr/sbin/sshd       # run sshd inside the namespace
ssh 192.168.3.11
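Teardown is just deleting the namespace; veth-ext disappears with it, taking its peer vethint along:
ip netns pids test01 | xargs -r kill   # stop the sshd (and anything else) in the namespace
ip netns delete test01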