HowTo: Fedora 20 and the FOSS xorg-x11-drv-ati driver

I’ve been having this problem lately:

My Fedora 20 system would freeze after a while when using the xorg-x11-drv-ati driver.

Another thing I noticed was the card's fan spinning up and up like crazy.

The first thing I did was install lm_sensors:

yum -y install lm_sensors

And ran the sensors command:

$ sensors
radeon-pci-0100
Adapter: PCI adapter
temp1:        +85.5°C  (crit = +120.0°C, hyst = +90.0°C)

k10temp-pci-00c3
Adapter: PCI adapter
temp1:        +38.6°C  (high = +70.0°C)
                       (crit = +90.0°C, hyst = +85.0°C)

Wow! 85.5 °C on my AMD/Ati card! Now I know why the freezes happen: it overheats and crashes!
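If you want to keep an eye on just that number from a script, you can pull the temperature out of the sensors output. A minimal sketch (the sample output is hard-coded here for illustration; on a live system you'd pipe the real `sensors` output in instead):

```shell
# Parse the temp1 line from sensors-style output and print the bare number
sample='radeon-pci-0100
Adapter: PCI adapter
temp1:        +85.5°C  (crit = +120.0°C, hyst = +90.0°C)'

temp=$(printf '%s\n' "$sample" | awk '/^temp1:/ { gsub(/[+°C]/, "", $2); print $2; exit }')
echo "radeon temp: ${temp} °C"
```

On a real box, replace the hard-coded sample with `sensors | awk …` and you have a one-liner for cron or a status bar.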

So, I did what you need to do when you have problems: RTFM!

http://xorg.freedesktop.org/wiki/RadeonFeature/#index3h2

Now, there is a hint right there. You can try to control the power profile manually, but that is not exactly it: you want the system to do it for you. My answer was enabling dpm, by adding radeon.dpm=1 to the kernel line in /etc/grub2-efi.cfg (in my case).

So, it looks something like this:

        linuxefi /vmlinuz-3.12.9-301.fc20.x86_64 root=UUID=6c9b3ffd-c911-4dcb-9425-e44841ab379d ro rootflags=subvol=root vconsole.font=latarcyrheb-sun16  rhgb quiet LANG=en_US.UTF-8 radeon.dpm=1
        initrdefi /initramfs-3.12.9-301.fc20.x86_64.img
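Instead of editing the file by hand, Fedora's grubby tool can append the parameter to every installed kernel entry for you (run as root; this modifies your boot configuration):

```shell
# Add radeon.dpm=1 to the kernel arguments of all boot entries
grubby --update-kernel=ALL --args="radeon.dpm=1"
```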

Ok, now, the next thing is to rebuild my initrd image with dracut, which proved to be really easy:

su - 
dracut --force

Basically, if you don't rebuild the initrd, you will be dropped to a shell when booting; just exit it and you're good.
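After rebooting, you can check that dpm actually kicked in: the radeon driver exposes its power state via sysfs. A quick sketch (card0 is an assumption; your card may sit at another index):

```shell
# Print the current dpm state, if the radeon driver exposes it
for f in /sys/class/drm/card0/device/power_dpm_state \
         /sys/class/drm/card0/device/power_dpm_force_performance_level; do
    if [ -r "$f" ]; then
        printf '%s: %s\n' "$f" "$(cat "$f")"
    else
        echo "$f: not available (no radeon card or dpm disabled?)"
    fi
done
```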


HowTo: cgminer as systemd service

Ok, I have been setting up my CubieTruck with Fedora to be able to control my Butterflylabs hardware.

Obviously, I had to compile cgminer, which is straightforward, as mentioned here.

So, I want the CubieTruck to start cgminer automatically, as a systemd service.

I need a service unit:

# /etc/systemd/system/cgminer.service 
[Unit]
Description=Start mining bitcoin
After=syslog.target network.target nss-lookup.target

[Service]
Type=forking
ExecStart=/usr/local/bin/cgminer --syslog --real-quiet -o stratum+tcp://stratum.mining.eligius.st:3334 -u 1K7FEkbvBtMvtyT7S3RrPjuqWuJWfgUZBa -p somepassword -l 9

[Install]
WantedBy=multi-user.target
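Since the unit file is brand new, systemd has to re-read its configuration before it can see it:

```shell
su -c 'systemctl daemon-reload'
```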

And there it is! Next, just enable it to start on boot, and start it:

su -c 'systemctl enable cgminer.service'
su -c 'systemctl start cgminer.service'

If you wanna check out the output of cgminer (which is a lot, btw), you need journalctl:

su -c 'journalctl -u cgminer -f'

I recommend setting some sane limits in your journald.conf file:

/etc/systemd/journald.conf 
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.
#
# See journald.conf(5) for details

[Journal]
#Storage=auto
Compress=yes
#Seal=yes
#SplitMode=login
#SyncIntervalSec=5m
#RateLimitInterval=30s
#RateLimitBurst=1000
SystemMaxUse=100M
#SystemKeepFree=
SystemMaxFileSize=20M
RuntimeMaxUse=100M
#RuntimeKeepFree=
RuntimeMaxFileSize=20M
#MaxRetentionSec=
#MaxFileSec=1month
#ForwardToSyslog=yes
#ForwardToKMsg=no
#ForwardToConsole=no
#TTYPath=/dev/console
#MaxLevelStore=debug
#MaxLevelSyslog=debug
#MaxLevelKMsg=notice
#MaxLevelConsole=info
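Changes to journald.conf don't take effect until the daemon is restarted:

```shell
su -c 'systemctl restart systemd-journald'
```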

Now, I am no guru or anything like that. I might’ve missed some stuff. Keep the feedback coming, dudesses and dudes!


HowTo: Viewing documentation on GNU & Linux

Ok, so, before I go to bed, I wanted to share a quick tip with you.

Sometimes, you wanna check a man page on the CLI. AFAIK, you have two alternatives:

# man
man rsync

# info
info rsync

These are both pretty convenient and useful. But sometimes you wanna check it in a graphical environment.

Here are some options:

man --html=firefox bash
yelp man:bash
yelp info:bash

My favorite: yelp info:bash

Have fun!


Two big ideas: the biometric helper and the auto-music video app

Ok, today, I had two big ideas.

Speaking with Gabriel, I came up with the idea of a mini device, pluggable into the computer via USB, a PAN or whatever. It would be a medical device that you attach to your finger. It would take various metrics, including temperature, pressure and pulse, and, if necessary, it would pinch your finger and read the most basic chemistry in your blood (glucose, acidity, etc).

The device should be as low cost as possible. This device would enable a doctor, via Google Helpouts, to read the metrics in real time and let you know if you’re seriously ill or not.

This device would enable anybody in the world with an internet connection to consult a doctor.

The other idea is a more artistic one. There should be a web app or web service where a lazy musician could upload his/her songs and, by gathering a lot of information from the song (frequency, tempo, key, lyrics, etc), the app would generate a video, with closed captions.

The video wouldn't be random. The info gathered from the song would give the app enough to fetch Creative Commons pictures and video, rendering a re-editable video (by you, obviously) in tempo with the song and with images that have something to do with the lyrics, the sound and the mood of the song. Maybe it could even identify a basic style (blues, rock, jazz, etc).

Those are two good ideas if I may say so myself ;)


MariaDB + Galera: A big screw-up when synchronizing (SST)

Today we had a cluster outage for quite a considerable time (around 30-45 minutes, I think).

The problem was that I made a change to the cluster configuration so that it would use America/Mexico_City as the timezone.

The cluster didn't accept it and, when I tried to restart 2 of the nodes, they would no longer rejoin.

The procedure was to reset those nodes, which led to the remaining node blocking; it is documented that during a full synchronization (SST), the "donor" node blocks.

Since the remaining node was the only one left, it proceeded to block and left us without a DB.

This happened because I didn't know that the donor node blocks while synchronizing. It could have been avoided if I had made the configuration change on a single node, restarted that node and, when it failed, re-synced it; there would still have been one node available for use.

I'm explaining this because, on future occasions, what you have to do is: update and configure one node at a time. If it fails, you can re-sync it without any downtime. Also, do this in the wee hours of the morning so the boss doesn't even notice ;)
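For reference, how badly the donor blocks depends on the SST method. A minimal sketch of the relevant settings (a my.cnf-style Galera fragment; the file path and node addresses are placeholders, the option names are the standard wsrep ones):

```ini
# /etc/my.cnf.d/galera.cnf (sketch)
[mysqld]
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://node1,node2,node3
# rsync SST blocks the donor for the whole transfer;
# xtrabackup-v2 keeps the donor mostly available.
wsrep_sst_method=xtrabackup-v2
```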