Discussion: Problem with high cpu load (oi_151a)
Gernot Wolf
2011-10-20 16:44:28 UTC
Hello all,

I have a machine here at my home running OpenIndiana oi_151a, which
serves as a NAS on my home network. The original install was OpenSolaris
2009.06, which was later upgraded to snv_134b and recently to oi_151a.

So far this OSOL (now OI) box has performed excellently, with one major
exception: Sometimes, after a reboot, the cpu load was about 50-60%,
although the system was doing nothing. Until recently, another reboot
solved the issue.

This no longer works. The system now always has a CPU load of
50-60% when idle (and higher, of course, when there is actually some work
to do).

I've already googled the symptoms. That didn't turn up much useful
info, and the few things I found didn't apply to my problem. Most notable
was a problem that could reportedly be solved by disabling cpupm in
/etc/power.conf, but trying that had no effect on my system.
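(For reference, the cpupm workaround I tried amounts to one line in
/etc/power.conf plus re-running pmconfig; take this as a sketch, assuming an
otherwise stock power.conf:)

# vi /etc/power.conf              (set the cpupm line to "cpupm disable")
# /usr/sbin/pmconfig              (re-reads /etc/power.conf and applies it)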

So I'm finally out of my depth. I have to admit that my knowledge of
Unix is superficial at best, so I decided to try looking for help here.

I've run several diagnostic commands like top, powertop, lockstat etc.
and attached the results to this email (I've zipped the results of kstat
because they were >1MB).

One important thing is that when I boot the oi_151a live DVD
instead of the installed system, I also get the high CPU
load. I mention this because I have installed several extra things on my OI
box, like vsftpd, svn, netstat etc. At first I thought this problem
might be caused by some of this extra stuff, but seeing the same behavior
when booting the live DVD ruled that out (I think).

The machine is a custom-built medium tower:
S-775 Intel DG965WHMKR ATX mainboard
Intel Core 2 Duo E4300 CPU 1.8GHz
1x IDE DVD recorder
1x IDE HD 200GB (serves as system drive)
6x SATA II 1.5TB HD (configured as zfs raidz2 array)

I have to solve this problem. Although the system runs fine and
absolutely serves its purpose, having the CPU at 50-60% load constantly
is a waste of energy and surely puts unhealthy stress on the hardware.

Anyone any ideas...?

Regards,
Gernot Wolf
Ken Gunderson
2011-10-20 16:52:05 UTC
Post by Gernot Wolf
Anyone any ideas...?
One. But I haven't even made a cursory scan of your logs. Nor do I know
if it is a problem any longer, but I used to see something similar with the
bge driver. It worked great on OS 2009.06, but subsequent releases horked it
up somehow. My solution was to use the other, nvidia-based interface. I think
there is/was a bug filed on Illumos Redmine? Others will surely know more,
but a quick test you can do is to try a different network driver and see if
the problem goes away.

hth-- Ken
--
Regards-- Ken Gunderson
Gernot Wolf
2011-10-20 17:34:59 UTC
Wow, that was fast :)

However, the NIC integrated on the Intel DG965WHMKR mainboard is an Intel
82566DC according to the device driver utility, and the reported driver is
e1000g. Isn't the bge driver for Broadcom NICs?

And what do you mean by "the other nvidia based interface"? This
mainboard has only this one integrated network interface mentioned
above, no pieces of nvidia hardware anywhere, as far as I can see...

Nevertheless, it could be worth trying a different network driver.
However, as I mentioned in my previous post, my know-how concerning Unix
is very limited. Where can I find an alternative driver for the Intel
82566DC interface, and how do I install it?

Anyway, thanks for your quick response! :)

Regards,
Gernot Wolf
Ken Gunderson
2011-10-20 17:44:08 UTC
Post by Gernot Wolf
Wow, that was fast :)
Just caught me with the morning coffee email review.
Post by Gernot Wolf
However, the NIC integrated on the Intel DG965WHMKR mainbord is an Intel
82566DC according to the device driver utility, the reported driver
e1000g. Isn't the bge driver for Broadcom NICs?
And like I said, I didn't even scan the logs. Just a quick idea off the top
of my head based on similar behavior. It could well be an entirely different
underlying cause. Yes, bge is the Broadcom driver, but I think there have
been issues with other NICs.
Post by Gernot Wolf
And what do you mean by "the other nvidia based interface"? This
mainboard has only this one integrated network interface mentioned
above, no pieces of nvidia hardware anywhere, as far as I can see...
I didn't mean to imply that I had the same mainboard you reported. I
have a few different NICs lying around, so for me this was an easy test: bge
-> high cpu load under 'idle'; nvidia -> goes away.
Post by Gernot Wolf
Nevertheless it could be worth to try a different network driver.
However, as I mentioned in my previous post, my know-how concerning unix
is very limited. Where can I find an alternative driver for the Intel
82566DC interface and how do I install it?
Is this a laptop or a desktop? If the latter, can you just try a different
card? If so, OI should automatically detect and load the appropriate
driver.
Post by Gernot Wolf
Anyway, thanks for your quick response! :)
Np. Sorry it wasn't much help. But I see others are putting you on the
diagnostic dtrace track, so I'll bow out now. Good luck :)
--
Regards-- Ken Gunderson
Gernot Wolf
2011-10-20 18:17:13 UTC
Post by Ken Gunderson
Post by Gernot Wolf
Wow, that was fast :)
Just caught me with the morning coffee email review.
Well, I just had a nice dinner :)
Post by Ken Gunderson
Post by Gernot Wolf
However, the NIC integrated on the Intel DG965WHMKR mainbord is an Intel
82566DC according to the device driver utility, the reported driver
e1000g. Isn't the bge driver for Broadcom NICs?
And like I said, I didn't even scan the logs. Just quick idea off top
of my head based on similar behavior. Could well be entirely different
underlying cause. Yes, the bge is broadcom but I think there have been
issues with other nics.
Oh, ic :)
Post by Ken Gunderson
Post by Gernot Wolf
And what do you mean by "the other nvidia based interface"? This
mainboard has only this one integrated network interface mentioned
above, no pieces of nvidia hardware anywhere, as far as I can see...
I didn't mean to imply that I had the same mainboard you reported. I
have a few different nics lying around so for me this was easy test: bge
-> high cpu load under 'idle'; nvidia -> goes away.
Well, you certainly have a point here.
Post by Ken Gunderson
Post by Gernot Wolf
Nevertheless it could be worth to try a different network driver.
However, as I mentioned in my previous post, my know-how concerning unix
is very limited. Where can I find an alternative driver for the Intel
82566DC interface and how do I install it?
Is this laptop of desktop? If latter, can you just try a different
card? If so, OI should automatically detect and load appropriate
driver.
Desktop. Good idea. I have this prehistoric PC here at my place; I think
I can try and put its NIC into my OI box. But that's for tomorrow :)
Post by Ken Gunderson
Post by Gernot Wolf
Anyway, thanks for your quick response! :)
Np. Sorry is wasn't much help. But I see others are putting you on
diagnostic dtrace track so I'll bow out now. Good luck :)
Thx :) I'm curious what they will get from the results I posted. Have a
nice day! :)

Regards,
Gernot Wolf
Michael Stapleton
2011-10-20 17:22:39 UTC
Hi Gernot,

You have a high context switch rate.

try
#dtrace -n 'sched:::off-cpu { @[execname]=count()}'

for a few seconds, to see if you can get the name of an executable.

Mike
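(A slightly longer variant of the same probe, for anyone repeating this, also
records the kernel stacks behind the switches and stops by itself after ten
seconds instead of waiting for Ctrl-C; a sketch using the standard sched and
profile providers:)

#dtrace -n '
sched:::off-cpu
{
        /* count context switches per executable and per kernel stack */
        @byexec[execname] = count();
        @bystack[stack()] = count();
}
tick-10s
{
        /* aggregations are printed automatically on exit */
        exit(0);
}'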
Gernot Wolf
2011-10-20 17:52:27 UTC
Yeah, I've been able to run these diagnostics on another OI box (at my
office, so much for OI not being used in production ;)), and noticed
that there were several values that were quite different. I just don't
have any idea of the meaning of these figures...

Anyway, here are the results of the dtrace command (I executed the
command twice, hence two result sets):

***@tintenfass:~# dtrace -n 'sched:::off-cpu { @[execname]=count()}'
dtrace: description 'sched:::off-cpu ' matched 3 probes
^C

ipmgmtd 1
gconfd-2 2
gnome-settings-d 2
idmapd 2
inetd 2
miniserv.pl 2
netcfgd 2
nscd 2
ospm-applet 2
ssh-agent 2
sshd 2
svc.startd 2
intrd 3
afpd 4
mdnsd 4
gnome-power-mana 5
clock-applet 7
sendmail 7
xscreensaver 7
fmd 9
fsflush 11
ntpd 11
updatemanagernot 13
isapython2.6 14
devfsadm 20
gnome-terminal 20
dtrace 23
mixer_applet2 25
smbd 39
nwam-manager 60
svc.configd 79
Xorg 100
sched 394078

***@tintenfass:~# dtrace -n 'sched:::off-cpu { @[execname]=count()}'
dtrace: description 'sched:::off-cpu ' matched 3 probes
^C

automountd 1
ipmgmtd 1
idmapd 2
in.routed 2
init 2
miniserv.pl 2
netcfgd 2
ssh-agent 2
sshd 2
svc.startd 2
fmd 3
hald 3
inetd 3
intrd 3
hald-addon-acpi 4
nscd 4
gnome-power-mana 5
sendmail 5
mdnsd 6
devfsadm 8
xscreensaver 9
fsflush 10
ntpd 14
updatemanagernot 16
mixer_applet2 21
isapython2.6 22
dtrace 24
gnome-terminal 24
smbd 39
nwam-manager 58
zpool-rpool 65
svc.configd 79
Xorg 82
sched 369939

So, quite obviously there is one executable standing out here, "sched".
Now, what's the meaning of these figures?

Regards,
Gernot Wolf
Michael Stapleton
2011-10-20 18:07:17 UTC
That rules out userland.

"sched" tells me that it is not a user process. If kernel code is
executing on a CPU, the tools report it as the sched process. The count is
how many times the process was taken off the CPU while dtrace was
running.
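(A possible next step from here, not something from the thread so far: sample
kernel stacks with the profile provider to see which kernel code is actually
burning the cycles. A sketch:)

#dtrace -n '
profile-997
/arg0/
{
        /* arg0 is the kernel PC; it is zero when the sample lands in user code */
        @[stack()] = count();
}
tick-10s
{
        /* keep only the ten hottest kernel stacks, then quit */
        trunc(@, 10);
        exit(0);
}'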
Michael Schuster
2011-10-20 18:17:34 UTC
Gernot,

is there anything suspicious in /var/adm/messages?

Michael

--
Michael Schuster
http://recursiveramblin
Gernot Wolf
2011-10-20 19:09:42 UTC
You mean, besides being quite huge? I took a quick look at it, but other
than getting a headache from doing that, my limited Unix skills
unfortunately fail me.

I've zipped it and attached it to this mail; maybe someone can get
something out of it...

Regards,
Gernot
Gernot Wolf
2011-10-20 19:20:32 UTC
Oops, something went wrong with my attachment. I'll try again...

Regards,
Gernot Wolf
Gernot Wolf
2011-10-20 19:37:07 UTC
Ok, for some reason this attachment refuses to go out :( Have to figure
that out...

Regards,
Gernot Wolf
James Carlson
2011-10-20 19:38:49 UTC
Post by Gernot Wolf
Ok, for some reason this attachement refuses to go out :( Have to figure
that out...
Probably just because it's huge. Try "tail -100 /var/adm/messages".
It's likely that if there's something going nuts on your system,
there'll be enough log-spam to identify it.
--
James Carlson 42.703N 71.076W <***@workingcode.com>
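(If the tail alone shows nothing obvious, another quick pass is to count the
most frequently repeated messages, since a misbehaving driver tends to flood
the log with the same line; a sketch that simply strips the timestamp columns
before counting:)

# awk '{ $1=$2=$3=""; print }' /var/adm/messages | sort | uniq -c | sort -rn | head -20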
Gernot Wolf
2011-10-20 20:04:44 UTC
Well, I zipped it; the zipfile is just 211K. Shouldn't be a problem, I
think...

Regards,
Gernot Wolf
Michael Stapleton
2011-10-20 19:55:33 UTC
Probably just too big.

Are there any ACPI settings in the BIOS?

Or we can try to change ACPI in OI:

#man eeprom
.
.
.
OPERANDS
x86 Only
acpi-user-options

A configuration variable that controls the use of
Advanced Configuration and Power Interface (ACPI), a
power management specification. The acceptable values
for this variable depend on the release of the Solaris
operating system you are using.

For all releases of Solaris 10 and Solaris 11, a value
of 0x0 means that there will be an attempt to use
ACPI if it is available on the system. A value of 0x2
disables the use of ACPI.

For the Solaris 10 1/06 release, a value of 0x8 means
that there will be an attempt to use ACPI in a mode com-
patible with previous releases of Solaris 10 if it is
available on the system. The default for Solaris 10 1/06
is 0x8.

For releases of Solaris 10 after the 1/06 release and
for Solaris 11, the default is 0x0.

Most users can safely accept the default value, which
enables ACPI if available. If issues related to the use
of ACPI are suspected on releases of Solaris after
Solaris 1/06, it is suggested to first try a value of
0x8 and then, if you do not obtain satisfactory results,
0x02.


Want to try:
#eeprom acpi-user-options=0x8
# init 6

?


If you have booting problems after changes, the following link will
help:

Boot Arguments You Can Specify When Editing the GRUB Menu at Boot Time
-B acpi-user-options=0x2


Disables ACPI entirely.


http://dlc.sun.com/osol/docs/content/SYSADV1/getov.html



Mike
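(Two related bits, as a sketch: running eeprom with just the variable name
prints the value currently set, and, per the man page excerpt above, setting
the option back to 0x0 later returns to the default ACPI behaviour:)

# eeprom acpi-user-options
# eeprom acpi-user-options=0x0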
Gernot Wolf
2011-10-20 20:42:35 UTC
Ok, I'll try that tomorrow. Too late to try anything that might result
in my box having booting problems ;)

Regards,
Gernot Wolf
Post by Michael Stapleton
Probably just too big.
Are there any ACPI settings in the BIOS?
or we can try to change ACPI in OI.
#man eeprom
.
.
.
OPERANDS
x86 Only
acpi-user-options
A configuration variable that controls the use of
Advanced Configuration and Power Interface (ACPI), a
power management specification. The acceptable values
for this variable depend on the release of the Solaris
operating system you are using.
For all releases of Solaris 10 and Solaris 11, a value
of of 0x0 means that there will be an attempt to use
ACPI if it is available on the system. A value of 0x2
disables the use of ACPI.
For the Solaris 10 1/06 release, a value of 0x8 means
that there will be an attempt to use ACPI in a mode com-
patible with previous releases of Solaris 10 if it is
available on the system. The default for Solaris 10 1/06
is 0x8.
For releases of Solaris 10 after the 1/06 release and
for Solaris 11, the default is 0x0.
Most users can safely accept the default value, which
enables ACPI if available. If issues related to the use
of ACPI are suspected on releases of Solaris after
Solaris 1/06, it is suggested to first try a value of
0x8 and then, if you do not obtain satisfactory results,
0x02.
#eeprom acpi-user-options=0x8
# init 6
?
If you have booting problems after changes, the following link will
Boot Arguments You Can Specify When Editing the GRUB Menu at Boot Time
-B acpi-user-options=0x2
Disables ACPI entirely.
http://dlc.sun.com/osol/docs/content/SYSADV1/getov.html
Mike
Post by Gernot Wolf
Ok, for some reason this attachement refuses to go out :( Have to figure
that out...
Regards,
Gernot Wolf
Post by Gernot Wolf
Ooops, something went wrong with my attachement. I'll try again...
Regards,
Gernot Wolf
Post by Gernot Wolf
You mean, besides being quite huge? I took a quick look at it, but other
than getting a headache by doing that, my limited unix skills
unfortunately fail me.
I've zipped it an attached it to this mail, maybe someone can get
anything out of it...
Regards,
Gernot
Post by Michael Stapleton
Gernot,
is there anything suspicious in /var/adm/messages?
Michael
On Thu, Oct 20, 2011 at 20:07, Michael Stapleton
Post by Michael Stapleton
That rules out userland.
Sched tells me that it is not a user process. If kernel code is
executing on a cpu, tools will report the sched process. The count was
how many times the process was taken off the CPU while dtrace was
running.
Post by Gernot Wolf
Yeah, I've been able to run this diagnostics on another OI box (at my
office, so much for OI not being used in production ;)), and noticed
that there were several values that were quite different. I just don't
have any idea on the meaning of this figures...
Anyway, here are the results of the dtrace command (I executed the
@[execname]=count()}'
dtrace: description 'sched:::off-cpu ' matched 3 probes
^C
ipmgmtd 1
gconfd-2 2
gnome-settings-d 2
idmapd 2
inetd 2
miniserv.pl 2
netcfgd 2
nscd 2
ospm-applet 2
ssh-agent 2
sshd 2
svc.startd 2
intrd 3
afpd 4
mdnsd 4
gnome-power-mana 5
clock-applet 7
sendmail 7
xscreensaver 7
fmd 9
fsflush 11
ntpd 11
updatemanagernot 13
isapython2.6 14
devfsadm 20
gnome-terminal 20
dtrace 23
mixer_applet2 25
smbd 39
nwam-manager 60
svc.configd 79
Xorg 100
sched 394078
@[execname]=count()}'
dtrace: description 'sched:::off-cpu ' matched 3 probes
^C
automountd 1
ipmgmtd 1
idmapd 2
in.routed 2
init 2
miniserv.pl 2
netcfgd 2
ssh-agent 2
sshd 2
svc.startd 2
fmd 3
hald 3
inetd 3
intrd 3
hald-addon-acpi 4
nscd 4
gnome-power-mana 5
sendmail 5
mdnsd 6
devfsadm 8
xscreensaver 9
fsflush 10
ntpd 14
updatemanagernot 16
mixer_applet2 21
isapython2.6 22
dtrace 24
gnome-terminal 24
smbd 39
nwam-manager 58
zpool-rpool 65
svc.configd 79
Xorg 82
sched 369939
So, quite obviously there is one executable standing out here, "sched",
now what's the meaning of this figures?
Regards,
Gernot Wolf
Post by Michael Stapleton
Hi Gernot,
You have a high context switch rate.
try
For a few seconds to see if you can get the name of and executable.
Mike
Post by Gernot Wolf
Hello all,
I have a machine here at my home running OpenIndiana oi_151a, which
serves as a NAS on my home network. The original install was OpenSolaris
2009.6 which was later upgraded to snv_134b, and recently to oi_151a.
So far this OSOL (now OI) box has performed excellently, with one major
exception: Sometimes, after a reboot, the cpu load was about 50-60%,
although the system was doing nothing. Until recently, another reboot
solved the issue.
This does not work any longer. The system has always a cpu load of
50-60% when idle (and higher of course when there is actually some work
to do).
I've already googled the symptoms. This didn't turn up very much useful
info, and the few things I found didn't apply to my problem. Most
noticably was this problem which could be solved by disabling cpupm in
/etc/power.conf, but trying that didn't show any effect on my system.
So I'm finally out of my depth. I have to admit that my knowledge of
Unix is superficial at best, so I decided to try looking for help here.
I've run several diagnostic commands like top, powertop, lockstat etc.
and attached the results to this email (I've zipped the results of kstat
because they were>1MB).
One important thing is that when I boot into the oi_151a live dvd
instead of booting into the installed system, I also get the high cpu
load. I mention this because I have installed several things on my OI
box like vsftpd, svn, netstat etc. I first thought that this problem
might be caused by some of this extra stuff, but getting the same system
when booting the live dvd ruled that out (I think).
S-775 Intel DG965WHMKR ATX mainbord
Intel Core 2 Duo E4300 CPU 1.8GHz
1x IDE DVD recorder
1x IDE HD 200GB (serves as system drive)
6x SATA II 1.5TB HD (configured as zfs raidz2 array)
I have to solve this problem. Although the system runs fine and
absolutely serves it's purpose, having the cpu at 50-60% load constantly
is a waste of energy and surely a rather unhealthy stress on the hardware.
Anyone any ideas...?
Regards,
Gernot Wolf
_______________________________________________
OpenIndiana-discuss mailing list
http://openindiana.org/mailman/listinfo/openindiana-discuss
Gernot Wolf
2011-10-20 21:45:40 UTC
Permalink
Ok, I could not resist giving it a try, screw my bed ;)

Mike, bingo! That one hit home. With acpi-user-options set to 0x08 and a
subsequent reboot, cpu load is back to normal (that is, load average = 0.05).

I'll run my diagnostics again on my system and post the results in case
anyone is interested in comparing the numbers. But that will really have to
wait for tomorrow ;)

Big thanks to all of you, you guys have been amazing! Your help is very
much appreciated :)
Regards,
Gernot Wolf
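A quick way to double-check the result after such a change (a sketch; the
exact figures will of course differ per machine):

# Confirm the property survived the reboot.
eeprom acpi-user-options

# Spot-check that the box really is idle again.
uptime
prstat -n 10 5 1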
Post by Michael Stapleton
Probably just too big.
Are there any ACPI settings in the BIOS?
or we can try to change ACPI in OI.
#man eeprom
.
.
.
OPERANDS
x86 Only
acpi-user-options
A configuration variable that controls the use of
Advanced Configuration and Power Interface (ACPI), a
power management specification. The acceptable values
for this variable depend on the release of the Solaris
operating system you are using.
For all releases of Solaris 10 and Solaris 11, a value
of of 0x0 means that there will be an attempt to use
ACPI if it is available on the system. A value of 0x2
disables the use of ACPI.
For the Solaris 10 1/06 release, a value of 0x8 means
that there will be an attempt to use ACPI in a mode com-
patible with previous releases of Solaris 10 if it is
available on the system. The default for Solaris 10 1/06
is 0x8.
For releases of Solaris 10 after the 1/06 release and
for Solaris 11, the default is 0x0.
Most users can safely accept the default value, which
enables ACPI if available. If issues related to the use
of ACPI are suspected on releases of Solaris after
Solaris 1/06, it is suggested to first try a value of
0x8 and then, if you do not obtain satisfactory results,
0x02.
#eeprom acpi-user-options=0x8
# init 6
?
If you have booting problems after the change, the following link describes
"Boot Arguments You Can Specify When Editing the GRUB Menu at Boot Time";
-B acpi-user-options=0x2
disables ACPI entirely.
http://dlc.sun.com/osol/docs/content/SYSADV1/getov.html
Mike
Gernot Wolf
2011-10-23 16:31:42 UTC
Permalink
Hello everyone,

sorry, I'm two days late, but here, as I promised, are the results of a
rerun of the diagnostic commands I initially ran as well as the dtrace
commands you guys sent me to troubleshoot my misbehaving system (see
attachments).

These are the results after I did

#eeprom acpi-user-options=0x8

and a subsequent reboot of my OI box. CPU load now is where it should
be, and has remained there through several reboots of the system. The
original problem never showed up again.

I'm posting this just in case anyone is interested in comparing some of
the numbers/infos.

Once again, thanks to all who have contributed their help. I'm still
positively surprised how quickly you guys reacted and solved my problem.
You've been great! :)
Regards,
Gernot Wolf
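The attachments themselves appear quoted further down in the thread (in
Jeff's reply). Judging purely from the output formats there, the rerun
covered roughly the following commands; this list is reconstructed from the
output and is not a verbatim copy of anything in the original mail:

# Custom ./dtrace_script (interrupt time per device; the script itself is
# not preserved in the thread).
dtrace -n 'sched:::off-cpu { @[execname] = count(); }'    # off-cpu counts
dtrace -n 'syscall:::entry { @[execname] = count(); }'    # syscall counts
dtrace -n 'profile-1001 { @[stack()] = count(); }'        # kernel profile
intrstat 30 1         # per-device interrupt load
iostat 5 10           # tty/disk/cpu activity
lockstat -I sleep 30  # profiling-interrupt sampling
mpstat 5 10           # per-CPU statistics
vmstat 5 10           # run queue, paging, context switches
vmstat -s             # lifetime event counters
prstat 5 1            # per-process snapshot (plus an interactive top run)
powertop              # C-/P-state residency and wakeup sources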
Josef 'Jeff' Sipek
2011-10-25 23:43:37 UTC
Permalink
I haven't read all of this thread, but it reminded me of this bug:

https://www.illumos.org/issues/1333

Jeff.
Post by Gernot Wolf
Hello everyone,
sorry, I'm two days late, but here, as I promised, are the results
of a rerun of the diagnostic commands I initially run as well as the
dtrace commands you guys send me to troubleshoot my misbehaving
system (see attachements).
These are the results after I did
#eeprom acpi-user-options=0x8
and a subsequent reboot of my OI box. CPU load now is where it
should be, and has remained there through several reboots of the
system. The original problem never showed up again.
I'm posting this just in case anyone is interested in comparing some
of the numbers/infos.
Once again, thanks to all who have contributed their help. I'm still
positively surprised how quickly you guys reacted and solved my
problem. You've been great! :)
Regards,
Gernot Wolf
dtrace: script './dtrace_script' matched 4 probes
^C
CPU ID FUNCTION:NAME
0 2 :END DEVICE TIME (ns)
heci0 27006
i9151 27429
uhci1 52216
hci13940 64814
uhci0 73509
uhci3 80426
ehci0 126846
ehci1 126999
uhci4 129084
uhci2 131898
e1000g0 220297
pci-ide0 2304660
dtrace: description 'sched:::off-cpu ' matched 3 probes
^C
idmapd 1
in.routed 1
inetd 1
intrd 1
ipmgmtd 1
miniserv.pl 1
sendmail 1
sshd 1
svc.startd 1
ttymon 1
automountd 2
fmd 2
mdnsd 2
sac 2
gnome-power-mana 4
devfsadm 5
fsflush 9
ntpd 9
smbd 10
gdm-simple-greet 17
Xorg 18
dtrace 20
svc.configd 82
nscd 84
sched 2530
dtrace: description 'syscall:::entry ' matched 234 probes
^C
idmapd 1
inetd 1
nscd 1
svc.configd 1
svc.startd 1
fmd 2
gconfd-2 3
at-spi-registryd 5
mdnsd 6
devfsadm 8
sendmail 10
smbd 10
gnome-power-mana 14
sshd 19
metacity 23
ntpd 45
gdm-simple-greet 116
Xorg 323
dtrace 2633
dtrace: description 'profile-1001 ' matched 1 probe
^C
unix`i86_monitor+0x10
unix`cpu_idle_mwait+0xbe
unix`idle+0x114
unix`thread_start+0x8
1
unix`mutex_enter+0x10
genunix`taskq_thread_wait+0x84
genunix`taskq_thread+0x308
unix`thread_start+0x8
1
unix`mutex_enter+0x10
genunix`cyclic_softint+0xdc
unix`cbe_softclock+0x1a
unix`av_dispatch_softvect+0x5f
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
1
unix`mutex_enter+0x10
genunix`cdev_ioctl+0x45
specfs`spec_ioctl+0x5a
genunix`fop_ioctl+0x7b
genunix`ioctl+0x18e
unix`sys_syscall+0x17a
1
unix`mutex_enter+0x10
unix`page_free+0x10d
unix`page_destroy+0x104
unix`segkmem_free_vn+0xe6
unix`segkmem_free+0x27
genunix`vmem_xfree+0x104
genunix`vmem_free+0x29
unix`kfreea+0x54
unix`i_ddi_mem_free+0x5d
rootnex`rootnex_teardown_copybuf+0x24
rootnex`rootnex_coredma_unbindhdl+0x90
rootnex`rootnex_dma_unbindhdl+0x35
genunix`ddi_dma_unbind_handle+0x41
ata`ghd_dmafree_attr+0x2b
ata`ata_disk_memfree+0x20
gda`gda_free+0x39
dadk`dadk_iodone+0xbf
dadk`dadk_pktcb+0xc6
ata`ata_disk_complete+0x119
ata`ata_hba_complete+0x38
1
i915`i915_gem_retire_requests+0xc
i915`i915_gem_retire_work_handler+0x3d
genunix`callout_list_expire+0x77
genunix`callout_expire+0x31
genunix`callout_execute+0x1e
genunix`taskq_thread+0x285
unix`thread_start+0x8
1
SDC`sysdc_update+0x1f5
genunix`callout_list_expire+0x77
genunix`callout_expire+0x31
genunix`callout_execute+0x1e
genunix`taskq_thread+0x285
unix`thread_start+0x8
1
genunix`fsflush_do_pages+0xb3
genunix`fsflush+0x39a
unix`thread_start+0x8
1
unix`cpu_idle_exit+0x1fc
unix`cpu_idle_mwait+0xfb
unix`idle+0x114
unix`thread_start+0x8
1
unix`tsc_read+0x5
genunix`gethrtime_unscaled+0xd
genunix`syscall_mstate+0x4a
unix`sys_syscall+0x10e
1
unix`tsc_read+0x5
genunix`gethrtime_unscaled+0xd
genunix`new_mstate+0x4b
unix`trap+0x1fc
unix`0xfffffffffb8001d6
1
unix`page_nextn+0xfe
genunix`fsflush_do_pages+0x104
genunix`fsflush+0x39a
unix`thread_start+0x8
1
unix`tsc_read+0x9
genunix`gethrtime_unscaled+0xd
genunix`syscall_mstate+0x4a
unix`0xfffffffffb800c86
1
unix`sys_syscall32+0x102
1
genunix`ioctl+0x4a
unix`sys_syscall+0x17a
1
unix`do_splx+0x8d
unix`xc_common+0x231
unix`xc_call+0x46
unix`hat_tlb_inval+0x283
unix`x86pte_inval+0xaa
unix`hat_pte_unmap+0xed
unix`hat_unload_callback+0x193
unix`hat_unload+0x41
unix`segkmem_free_vn+0x6f
unix`segkmem_free+0x27
genunix`vmem_xfree+0x104
genunix`vmem_free+0x29
unix`kfreea+0x54
unix`i_ddi_mem_free+0x5d
rootnex`rootnex_teardown_copybuf+0x24
rootnex`rootnex_coredma_unbindhdl+0x90
rootnex`rootnex_dma_unbindhdl+0x35
genunix`ddi_dma_unbind_handle+0x41
ata`ghd_dmafree_attr+0x2b
ata`ata_disk_memfree+0x20
1
unix`page_nextn+0x12
genunix`fsflush_do_pages+0x104
genunix`fsflush+0x39a
unix`thread_start+0x8
1
unix`page_nextn+0x14
genunix`fsflush_do_pages+0x104
genunix`fsflush+0x39a
unix`thread_start+0x8
1
genunix`callout_downheap+0x7e
genunix`callout_heap_delete+0x18b
genunix`callout_normal+0x27
genunix`cyclic_softint+0xdc
unix`cbe_softclock+0x1a
unix`av_dispatch_softvect+0x5f
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
1
genunix`fsflush_do_pages+0x104
genunix`fsflush+0x39a
unix`thread_start+0x8
1
genunix`syscall_mstate+0x3e
unix`sys_syscall+0x1a1
1
unix`tsc_gethrtime+0x65
genunix`gethrtime+0xd
genunix`timeout_generic+0x46
genunix`timeout+0x5b
SDC`sysdc_update+0x311
genunix`callout_list_expire+0x77
genunix`callout_expire+0x31
genunix`callout_execute+0x1e
genunix`taskq_thread+0x285
unix`thread_start+0x8
1
unix`page_nextn+0x46
genunix`fsflush_do_pages+0x104
genunix`fsflush+0x39a
unix`thread_start+0x8
1
sha1`sha1_block_data_order+0xdff
1
unix`ddi_get16+0x10
genunix`callout_list_expire+0x77
genunix`callout_expire+0x31
genunix`callout_execute+0x1e
genunix`taskq_thread+0x285
unix`thread_start+0x8
1
unix`cpu_idle_enter+0x109
unix`cpu_idle_mwait+0xdc
unix`idle+0x114
unix`thread_start+0x8
1
unix`page_lookup_create+0xd0
unix`page_lookup+0x26
genunix`swap_getapage+0xaf
genunix`swap_getpage+0x85
genunix`fop_getpage+0x9a
genunix`anon_zero+0xa3
genunix`segvn_faultpage+0x2a4
genunix`segvn_fault+0xc13
genunix`as_fault+0x5ee
unix`pagefault+0x99
unix`trap+0xe63
unix`0xfffffffffb8001d6
1
genunix`fsflush_do_pages+0x31d
genunix`fsflush+0x39a
unix`thread_start+0x8
1
genunix`cv_unsleep+0x64
genunix`setrun_locked+0x96
genunix`setrun+0x1e
genunix`cv_wakeup+0x36
genunix`callout_list_expire+0x77
genunix`callout_expire+0x31
genunix`callout_realtime+0x26
genunix`cyclic_softint+0xdc
unix`cbe_low_level+0x17
unix`av_dispatch_softvect+0x5f
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
1
unix`tsc_gethrtime+0x8d
genunix`gethrtime+0xd
genunix`lbolt_event_driven+0x18
genunix`ddi_get_lbolt+0xd
unix`setbackdq+0x122
genunix`sleepq_wakeone_chan+0x89
genunix`cv_signal+0x8e
genunix`taskq_dispatch+0x37c
genunix`callout_normal+0x121
genunix`cyclic_softint+0xdc
unix`cbe_softclock+0x1a
unix`av_dispatch_softvect+0x5f
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
1
dtrace`dtrace_dynvar_clean+0x31
dtrace`dtrace_state_clean+0x23
genunix`cyclic_softint+0xdc
unix`cbe_low_level+0x17
unix`av_dispatch_softvect+0x5f
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
1
genunix`pcache_poll+0x480
genunix`poll_common+0x427
genunix`pollsys+0xea
unix`sys_syscall+0x17a
1
genunix`syscall_mstate+0x81
unix`sys_syscall+0x1a1
1
TS`ts_setrun+0x1d
genunix`cv_unsleep+0x8c
genunix`setrun_locked+0x96
genunix`setrun+0x1e
genunix`cv_wakeup+0x36
genunix`callout_list_expire+0x77
genunix`callout_expire+0x31
genunix`callout_realtime+0x26
genunix`cyclic_softint+0xdc
unix`cbe_low_level+0x17
unix`av_dispatch_softvect+0x5f
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
1
rootnex`rootnex_get_sgl+0x1
rootnex`rootnex_dma_bindhdl+0x4c
genunix`ddi_dma_buf_bind_handle+0x117
ata`ghd_dma_buf_bind_attr+0x98
ata`ata_disk_memsetup+0xc3
gda`gda_pktprep+0xab
dadk`dadk_pktprep+0x3a
dadk`dadk_pkt+0x4d
strategy`dmult_enque+0x7d
dadk`dadk_strategy+0x7d
cmdk`cmdkstrategy+0x16c
genunix`bdev_strategy+0x75
genunix`ldi_strategy+0x59
zfs`vdev_disk_io_start+0xd0
zfs`zio_vdev_io_start+0x1ea
zfs`zio_execute+0x8d
genunix`taskq_thread+0x285
unix`thread_start+0x8
1
genunix`fsflush_do_pages+0x358
genunix`fsflush+0x39a
unix`thread_start+0x8
1
genunix`read
1
unix`kcopy+0x1b
genunix`copyin_nowatch+0x48
genunix`copyin_args32+0x3a
genunix`syscall_entry+0xb3
unix`_sys_sysenter_post_swapgs+0x12a
1
unix`cmt_ev_thread_swtch+0x2c
unix`pg_ev_thread_swtch+0x10d
unix`swtch+0xdb
genunix`cv_wait_sig_swap_core+0x174
genunix`cv_wait_sig_swap+0x18
genunix`cv_waituntil_sig+0x13c
genunix`poll_common+0x47f
genunix`pollsys+0xea
unix`sys_syscall+0x17a
1
unix`page_nextn+0x9a
genunix`fsflush_do_pages+0x104
genunix`fsflush+0x39a
unix`thread_start+0x8
1
genunix`cpu_decay+0x27
genunix`cpu_update_pct+0x89
unix`setbackdq+0x2b3
genunix`sleepq_wakeone_chan+0x89
genunix`cv_signal+0x8e
genunix`taskq_dispatch+0x37c
genunix`callout_normal+0x121
genunix`cyclic_softint+0xdc
unix`cbe_softclock+0x1a
unix`av_dispatch_softvect+0x5f
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
1
unix`ddi_io_put8+0xf
ata`ata_disk_start_dma_out+0x88
ata`ata_ctlr_fsm+0x1fb
ata`ata_hba_start+0x84
ata`ghd_waitq_process_and_mutex_hold+0xdf
ata`ghd_intr+0x8d
ata`ata_intr+0x27
unix`av_dispatch_autovect+0x7c
unix`dispatch_hardint+0x33
unix`switch_sp_and_call+0x13
1
unix`ddi_io_put8+0xf
ata`ata_disk_start_common+0xda
ata`ata_disk_start_dma_out+0x32
ata`ata_ctlr_fsm+0x1fb
ata`ata_hba_start+0x84
ata`ghd_waitq_process_and_mutex_hold+0xdf
ata`ghd_intr+0x8d
ata`ata_intr+0x27
unix`av_dispatch_autovect+0x7c
unix`dispatch_hardint+0x33
unix`switch_sp_and_call+0x13
1
unix`rw_exit+0xf
genunix`callout_heap_delete+0x1e3
genunix`callout_realtime+0x1e
genunix`cyclic_softint+0xdc
unix`cbe_low_level+0x17
unix`av_dispatch_softvect+0x5f
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
1
unix`setbackdq+0x328
genunix`cv_unsleep+0x8c
genunix`setrun_locked+0x96
genunix`setrun+0x1e
genunix`cv_wakeup+0x36
genunix`callout_list_expire+0x77
genunix`callout_expire+0x31
genunix`callout_realtime+0x26
genunix`cyclic_softint+0xdc
unix`cbe_low_level+0x17
unix`av_dispatch_softvect+0x5f
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
1
unix`av_check_softint_pending+0x8
unix`av_dispatch_softvect+0x48
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
1
genunix`thread_lock+0x41
genunix`setrun+0x16
genunix`cv_wakeup+0x36
genunix`callout_list_expire+0x77
genunix`callout_expire+0x31
genunix`callout_realtime+0x26
genunix`cyclic_softint+0xdc
unix`cbe_low_level+0x17
unix`av_dispatch_softvect+0x5f
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
1
unix`splr+0x92
genunix`thread_lock+0x1d
genunix`post_syscall+0x669
unix`0xfffffffffb800c91
1
genunix`fsflush_do_pages+0x395
genunix`fsflush+0x39a
unix`thread_start+0x8
1
unix`plcnt_inc_dec+0xa3
unix`page_ctr_sub_internal+0x5d
unix`page_ctr_sub+0x78
unix`page_get_mnode_freelist+0x346
unix`page_get_anylist+0x200
unix`page_create_io+0x1e8
unix`page_create_io_wrapper+0x57
unix`segkmem_xalloc+0xc0
unix`segkmem_alloc_io_4G+0x3b
genunix`vmem_xalloc+0x546
genunix`vmem_alloc+0x161
unix`kalloca+0x203
unix`i_ddi_mem_alloc+0x173
rootnex`rootnex_setup_copybuf+0x11f
rootnex`rootnex_bind_slowpath+0x70
rootnex`rootnex_coredma_bindhdl+0x334
rootnex`rootnex_dma_bindhdl+0x4c
genunix`ddi_dma_buf_bind_handle+0x117
ata`ghd_dma_buf_bind_attr+0x98
ata`ata_disk_memsetup+0xc3
1
unix`mul32+0xd
genunix`scalehrtime+0x19
genunix`cpu_update_pct+0x76
unix`setbackdq+0x2b3
genunix`sleepq_wakeone_chan+0x89
genunix`cv_signal+0x8e
genunix`taskq_dispatch+0x37c
genunix`callout_normal+0x121
genunix`cyclic_softint+0xdc
unix`cbe_softclock+0x1a
unix`av_dispatch_softvect+0x5f
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
1
genunix`cyclic_timer+0x89
genunix`cyclic_softint+0xdc
unix`cbe_softclock+0x1a
unix`av_dispatch_softvect+0x5f
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
2
unix`atomic_swap_64+0x7
genunix`callout_list_expire+0x77
genunix`callout_expire+0x31
genunix`callout_execute+0x1e
genunix`taskq_thread+0x285
unix`thread_start+0x8
2
unix`page_nextn
genunix`fsflush+0x39a
unix`thread_start+0x8
2
unix`ddi_io_put16+0x10
genunix`callout_list_expire+0x77
genunix`callout_expire+0x31
genunix`callout_execute+0x1e
genunix`taskq_thread+0x285
unix`thread_start+0x8
2
genunix`fsflush_do_pages+0x2fa
genunix`fsflush+0x39a
unix`thread_start+0x8
3
genunix`fsflush_do_pages+0x13e
genunix`fsflush+0x39a
unix`thread_start+0x8
4
unix`dispatch_softint+0x27
unix`switch_sp_and_call+0x13
5
genunix`fsflush_do_pages+0x124
genunix`fsflush+0x39a
unix`thread_start+0x8
7
18
unix`i86_mwait+0xd
unix`cpu_idle_mwait+0xf1
unix`idle+0x114
unix`thread_start+0x8
16862
dtrace: description 'sched:::off-cpu ' matched 3 probes
^C
unix`swtch_from_zombie+0x96
genunix`lwp_exit+0x3cd
genunix`syslwp_exit+0x22
unix`_sys_sysenter_post_swapgs+0x149
1
unix`swtch+0x145
genunix`cv_wait+0x61
mac`mac_rx_srs_poll_ring+0xa0
unix`thread_start+0x8
1
unix`swtch+0x145
unix`preempt+0xd7
genunix`post_syscall+0x651
genunix`syscall_exit+0x59
unix`0xfffffffffb800ec9
1
unix`swtch+0x145
unix`preempt+0xd7
unix`trap+0x1503
unix`sys_rtt_common+0x68
unix`_sys_rtt_ints_disabled+0x8
1
unix`swtch+0x145
genunix`cv_timedwait_hires+0xe0
genunix`cv_reltimedwait+0x4f
genunix`taskq_thread_wait+0x74
genunix`taskq_d_thread+0x144
unix`thread_start+0x8
2
unix`swtch+0x145
genunix`cv_timedwait_hires+0xe0
genunix`cv_reltimedwait+0x4f
idm`idm_wd_thread+0x1d7
unix`thread_start+0x8
2
unix`swtch+0x145
unix`preempt+0xd7
genunix`post_syscall+0x651
unix`0xfffffffffb800c91
2
unix`swtch+0x145
genunix`cv_timedwait_sig_hires+0x1e9
genunix`cv_waituntil_sig+0xba
genunix`lwp_park+0x157
genunix`syslwp_park+0x31
unix`sys_syscall32+0xff
3
unix`swtch+0x145
genunix`cv_wait+0x61
zfs`txg_thread_wait+0x5f
zfs`txg_sync_thread+0x1de
unix`thread_start+0x8
3
unix`swtch+0x145
genunix`cv_timedwait_hires+0xe0
genunix`cv_reltimedwait+0x4f
scsi`scsi_watch_thread+0x330
unix`thread_start+0x8
3
unix`swtch+0x145
genunix`cv_timedwait_hires+0xe0
genunix`cv_timedwait+0x5a
zfs`txg_thread_wait+0x7c
zfs`txg_sync_thread+0x118
unix`thread_start+0x8
3
unix`swtch+0x145
genunix`cv_wait_sig_swap_core+0x174
genunix`cv_wait_sig_swap+0x18
genunix`cv_waituntil_sig+0x13c
genunix`poll_common+0x47f
genunix`pollsys+0xea
unix`_sys_sysenter_post_swapgs+0x149
3
unix`swtch+0x145
genunix`cv_wait+0x61
zfs`txg_thread_wait+0x5f
zfs`txg_quiesce_thread+0x94
unix`thread_start+0x8
6
unix`swtch+0x145
genunix`cv_wait+0x61
genunix`fsflush+0x201
unix`thread_start+0x8
9
unix`swtch+0x145
genunix`cv_timedwait_hires+0xe0
genunix`cv_reltimedwait+0x4f
smbsrv`smb_thread_continue_timedwait_locked+0x45
smbsrv`smb_thread_continue_timedwait+0x3c
smbsrv`smb_server_timers+0x5d
smbsrv`smb_thread_entry_point+0x69
unix`thread_start+0x8
9
unix`swtch+0x145
genunix`cv_timedwait_hires+0xe0
genunix`cv_reltimedwait+0x4f
genunix`seg_pasync_thread+0xcb
unix`thread_start+0x8
9
unix`swtch+0x145
genunix`cv_timedwait_hires+0xe0
genunix`cv_timedwait+0x5a
zfs`l2arc_feed_thread+0xa1
unix`thread_start+0x8
9
unix`swtch+0x145
genunix`cv_timedwait_hires+0xe0
genunix`cv_timedwait+0x5a
zfs`arc_reclaim_thread+0x13d
unix`thread_start+0x8
9
unix`swtch+0x145
genunix`cv_wait_sig_swap_core+0x174
genunix`cv_wait_sig_swap+0x18
genunix`sigsuspend+0x107
unix`_sys_sysenter_post_swapgs+0x149
9
unix`swtch+0x145
genunix`cv_timedwait_sig_hires+0x1e9
genunix`cv_waituntil_sig+0xba
genunix`lwp_park+0x157
genunix`syslwp_park+0x31
unix`_sys_sysenter_post_swapgs+0x149
10
unix`swtch+0x145
genunix`cv_timedwait_sig_hires+0x1e9
genunix`cv_waituntil_sig+0xba
genunix`nanosleep+0x120
unix`_sys_sysenter_post_swapgs+0x149
10
unix`swtch+0x145
genunix`cv_wait_sig_swap_core+0x174
genunix`cv_wait_sig_swap+0x18
genunix`cv_waituntil_sig+0x13c
genunix`poll_common+0x47f
genunix`pollsys+0xea
unix`sys_syscall+0x17a
18
unix`swtch+0x145
genunix`cv_timedwait_sig_hires+0x1e9
genunix`cv_waituntil_sig+0xba
genunix`lwp_park+0x157
genunix`syslwp_park+0x31
unix`sys_syscall+0x17a
19
unix`swtch_to+0xe6
unix`idle+0xb8
unix`thread_start+0x8
24
unix`swtch+0x145
genunix`cv_timedwait_sig_hires+0x1e9
genunix`cv_waituntil_sig+0xba
genunix`poll_common+0x47f
genunix`pollsys+0xea
unix`_sys_sysenter_post_swapgs+0x149
25
unix`swtch+0x145
genunix`cv_timedwait_hires+0xe0
genunix`cv_reltimedwait+0x4f
sata`sata_event_daemon+0xfe
unix`thread_start+0x8
182
unix`swtch+0x145
genunix`cv_timedwait_hires+0xe0
genunix`cv_reltimedwait+0x4f
stmf`stmf_svc_timeout+0x23a
stmf`stmf_svc+0x129
genunix`taskq_thread+0x285
unix`thread_start+0x8
453
unix`swtch+0x145
genunix`cv_wait+0x61
genunix`taskq_thread_wait+0x84
genunix`taskq_thread+0x308
unix`thread_start+0x8
690
unix`swtch+0x145
unix`idle+0xc4
unix`thread_start+0x8
1064
device | cpu0 %tim cpu1 %tim
-------------+------------------------------
e1000g#0 | 0 0,0 3 0,0
ehci#0 | 3 0,0 0 0,0
ehci#1 | 0 0,0 3 0,0
hci1394#0 | 1 0,0 0 0,0
heci#0 | 1 0,0 0 0,0
i915#1 | 1 0,0 0 0,0
pci-ide#0 | 0 0,0 2 0,0
uhci#0 | 1 0,0 0 0,0
uhci#1 | 0 0,0 0 0,0
uhci#2 | 0 0,0 3 0,0
uhci#3 | 1 0,0 0 0,0
uhci#4 | 3 0,0 0 0,0
tty cmdk0 sd0 sd1 sd2 cpu
tin tout kps tps serv kps tps serv kps tps serv kps tps serv us sy wt id
0 0 2 0 15 164 2 11 169 2 10 164 2 11 0 1 0 99
0 47 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
Profiling interrupt: 5840 events in 30.107 seconds (194 events/sec)
Count indv cuml rcnt nsec Hottest CPU+PIL Caller
-------------------------------------------------------------------------------
5795 99% 99% 0.00 1464 cpu[0] i86_mwait
23 0% 100% 0.00 1754 cpu[1] fsflush_do_pages
9 0% 100% 0.00 3766 cpu[0] (usermode)
7 0% 100% 0.00 1619 cpu[1] page_nextn
1 0% 100% 0.00 2717 cpu[1] pcacheset_resolve
1 0% 100% 0.00 2365 cpu[0]+11 restore_mstate
1 0% 100% 0.00 4830 cpu[1] page_unlock
1 0% 100% 0.00 1157 cpu[1] do_splx
1 0% 100% 0.00 3023 cpu[1] htable_release
1 0% 100% 0.00 2268 cpu[1] hment_compare
-------------------------------------------------------------------------------
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 2 0 2 389 178 462 6 21 11 0 74 0 1 0 99
1 1 0 2 225 139 417 4 21 15 0 64 0 1 0 99
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 1 309 106 168 0 8 4 0 102 0 0 0 100
1 0 0 1 103 50 121 0 8 6 0 44 0 0 0 100
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 1 310 106 134 0 6 1 0 85 0 0 0 100
1 0 0 2 138 62 158 0 5 0 0 49 0 0 0 99
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 0 307 106 135 0 7 1 0 79 0 0 0 100
1 0 0 0 138 61 151 0 8 1 0 27 0 1 0 99
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 1 0 0 307 106 144 0 5 1 0 109 0 0 0 100
1 0 0 0 128 59 145 0 6 2 0 36 0 0 0 100
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 0 306 105 137 0 4 1 0 79 0 0 0 100
1 0 0 0 130 63 141 0 4 0 0 23 0 1 0 99
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 0 306 105 144 0 6 3 0 82 0 0 0 100
1 0 0 0 134 61 162 0 6 0 0 71 0 0 0 99
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 1 308 106 164 0 5 1 0 75 0 0 0 100
1 0 0 1 101 43 106 0 5 0 0 29 0 0 0 100
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 0 309 107 133 0 8 1 0 81 0 0 0 100
1 0 0 0 139 55 149 0 8 1 0 45 0 1 0 99
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 1 311 107 149 0 6 1 0 69 0 0 0 100
1 0 0 1 116 51 125 0 6 0 0 40 0 0 0 100
OpenIndiana PowerTOP version 1.2 (C) 2009 Intel Corporation
Collecting data for 30.00 second(s)
C-states (idle power) Avg Residency
C0 (cpu running) (0.0%)
C1 4.8ms (100.0%)
P-states (frequencies)
1200 Mhz 88.9%
1800 Mhz 11.1%
Wakeups-from-idle per second: 484.2 interval: 30.0s
37.9% (183.6) sched :<xcalls> unix`dtrace_xcall_func
20.7% (100.0) <kernel> :genunix`clock
16.6% ( 80.2) <kernel> :genunix`cv_wakeup
10.3% ( 50.0) <kernel> :SDC`sysdc_update
4.0% ( 19.3) <kernel> :uhci`uhci_handle_root_hub_status_change
2.1% ( 10.0) <kernel> :ata`ghd_timeout
1.6% ( 7.7) <kernel> :ehci`ehci_handle_root_hub_status_change
1.0% ( 5.0) <kernel> :uhci`uhci_cmd_timeout_hdlr
0.9% ( 4.3) <interrupt> :pci-ide#0
0.8% ( 4.0) <kernel> :genunix`schedpaging
0.4% ( 2.0) <kernel> :cpudrv`cpudrv_monitor_disp
0.3% ( 1.7) <kernel> :i915`i915_gem_retire_work_handler
0.3% ( 1.7) <interrupt> :e1000g#0
0.3% ( 1.6) intrd :<xcalls> unix`hati_demap_func
0.2% ( 1.0) <kernel> :TS`ts_update
0.2% ( 1.0) <kernel> :e1000g`e1000g_local_timer
0.2% ( 1.0) <kernel> :genunix`clock_realtime_fire
0.2% ( 1.0) <interrupt> :uhci#0
0.2% ( 1.0) <interrupt> :uhci#1
0.2% ( 1.0) <interrupt> :ehci#0
0.2% ( 1.0) <interrupt> :uhci#3
0.2% ( 1.0) <interrupt> :uhci#2
0.2% ( 1.0) <interrupt> :ehci#1
0.2% ( 1.0) <interrupt> :uhci#4
0.2% ( 0.7) sched :<xcalls> unix`hati_demap_func
0.1% ( 0.6) sched :<xcalls> unix`speedstep_pstate_transition
0.1% ( 0.5) <kernel> :heci`heci_wd_timer
0.1% ( 0.3) <kernel> :kcf`rnd_handler
0.0% ( 0.2) <kernel> :ahci`ahci_watchdog_handler
0.0% ( 0.2) zpool-rpool :<xcalls> unix`speedstep_pstate_transition
0.0% ( 0.1) <kernel> :swrand`rnd_handler
0.0% ( 0.1) <kernel> :ip`igmp_slowtimo
0.0% ( 0.1) <kernel> :ip`squeue_fire
0.0% ( 0.1) <kernel> :genunix`vmem_update
0.0% ( 0.1) smbd :<xcalls> unix`speedstep_pstate_transition
0.0% ( 0.1) <kernel> :ip`tcp_timer_callback
0.0% ( 0.1) <kernel> :genunix`kmem_update
0.0% ( 0.0) <kernel> :genunix`realitexpire
powertop: battery kstat not found (-1)
no ACPI power usage estimate available
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
1777 gdm 187M 35M sleep 59 0 0:00:16 0,1% gdm-simple-gree/1
1909 root 4568K 3364K cpu0 59 0 0:00:00 0,0% prstat/1
1713 root 395M 84M sleep 59 0 0:00:10 0,0% Xorg/1
389 root 6768K 4280K sleep 59 0 0:00:05 0,0% nscd/30
721 root 1984K 1120K sleep 59 0 0:00:00 0,0% smbiod-svc/2
738 root 14M 9228K sleep 59 0 0:00:13 0,0% smbd/16
692 root 19M 12M sleep 59 0 0:00:05 0,0% fmd/27
678 root 4168K 2320K sleep 59 0 0:00:00 0,0% rmvolmgr/1
644 daemon 5452K 3284K sleep 59 0 0:00:01 0,0% idmapd/5
626 root 1620K 980K sleep 59 0 0:00:00 0,0% utmpd/1
622 root 4600K 3484K sleep 59 0 0:00:01 0,0% inetd/4
693 root 4216K 2188K sleep 59 0 0:00:00 0,0% syslogd/11
635 root 2880K 1640K sleep 59 0 0:00:00 0,0% automountd/2
636 root 2952K 1548K sleep 59 0 0:00:00 0,0% automountd/4
612 root 2164K 1352K sleep 59 0 0:00:00 0,0% sac/1
615 root 1988K 1440K sleep 59 0 0:00:00 0,0% ttymon/1
1776 gdm 113M 13M sleep 59 0 0:00:00 0,0% metacity/1
603 daemon 3204K 1304K sleep 59 0 0:00:00 0,0% rpcbind/1
599 root 4400K 3348K sleep 59 0 0:00:00 0,0% console-kit-dae/2
1769 gdm 7772K 5872K sleep 59 0 0:00:00 0,0% at-spi-registry/1
698 root 4388K 1920K sleep 59 0 0:00:00 0,0% sshd/1
796 root 7252K 3660K sleep 59 0 0:00:02 0,0% miniserv.pl/1
700 noaccess 2520K 1624K sleep 59 0 0:00:01 0,0% mdnsd/1
1863 gernot 7748K 4852K sleep 59 0 0:00:00 0,0% sshd/1
235 root 2164K 1612K sleep 59 0 0:00:00 0,0% powerd/4
561 root 2728K 1692K sleep 59 0 0:00:03 0,0% in.routed/1
330 root 3408K 2092K sleep 59 0 0:00:00 0,0% dbus-daemon/1
764 root 6012K 1492K sleep 59 0 0:00:00 0,0% cnid_metad/1
555 root 3496K 2128K sleep 59 0 0:00:03 0,0% hald-addon-acpi/1
502 root 2388K 1184K sleep 59 0 0:00:00 0,0% in.ndpd/1
715 root 2164K 1344K sleep 59 0 0:00:00 0,0% dns-sd/1
1755 gdm 15M 10M sleep 59 0 0:00:00 0,0% gnome-session/2
249 root 4616K 3308K sleep 59 0 0:00:03 0,0% devfsadm/6
316 root 1980K 956K sleep 59 0 0:00:00 0,0% iscsid/2
551 root 4292K 2736K sleep 59 0 0:00:00 0,0% hald-addon-cpuf/1
714 root 2164K 1344K sleep 59 0 0:00:00 0,0% dns-sd/1
766 root 8008K 3144K sleep 59 0 0:00:00 0,0% afpd/1
563 root 2832K 2076K sleep 59 0 0:00:00 0,0% hald-addon-stor/3
459 root 3784K 2308K sleep 59 0 0:00:00 0,0% hald-runner/1
456 root 7436K 6080K sleep 59 0 0:00:04 0,0% hald/4
543 root 3896K 2340K sleep 59 0 0:00:00 0,0% hald-addon-netw/1
1712 root 6196K 4344K sleep 59 0 0:00:00 0,0% gdm-simple-slav/2
168 root 6020K 3184K sleep 59 0 0:00:00 0,0% syseventd/18
1779 root 4236K 3100K sleep 59 0 0:00:00 0,0% gdm-session-wor/1
684 root 2340K 1456K sleep 59 0 0:00:00 0,0% ttymon/1
346 root 6264K 3592K sleep 59 0 0:00:11 0,0% ntpd/1
252 root 3800K 2784K sleep 59 0 0:00:00 0,0% picld/4
124 root 0K 0K sleep 99 -20 0:13:39 0,0% zpool-tank/138
46 netcfg 3440K 2540K sleep 59 0 0:00:00 0,0% netcfgd/3
178 root 2484K 1500K sleep 60 -20 0:00:00 0,0% zonestatd/5
579 root 2256K 1392K sleep 59 0 0:00:00 0,0% cron/1
777 root 6668K 5564K sleep 59 0 0:00:16 0,0% intrd/1
49 netadm 3860K 2744K sleep 59 0 0:00:01 0,0% ipmgmtd/4
47 root 2980K 2024K sleep 59 0 0:00:00 0,0% dlmgmtd/6
12 root 14M 13M sleep 59 0 0:00:19 0,0% svc.configd/17
10 root 13M 11M sleep 59 0 0:00:05 0,0% svc.startd/13
143 root 2548K 1628K sleep 59 0 0:00:00 0,0% pfexecd/3
1 root 2720K 1852K sleep 59 0 0:00:00 0,0% init/1
6 root 0K 0K sleep 99 -20 0:00:02 0,0% zpool-rpool/138
Total: 77 processes, 528 lwps, load averages: 0,01, 0,10, 0,11
last pid: 1917; load avg: 0.01, 0.06, 0.10; up 2+18:06:01 17:59:17
79 processes: 78 sleeping, 1 on cpu
CPU states: 99.7% idle, 0.0% user, 0.3% kernel, 0.0% iowait, 0.0% swap
Kernel: 300 ctxsw, 1 trap, 457 intr, 122 syscall
Memory: 8118M phys mem, 1110M free mem, 4058M total swap, 4058M free swap
PID USERNAME NLWP PRI NICE SIZE RES STATE TIME CPU COMMAND
3 root 1 60 -20 0K 0K sleep 12:05 0.14% fsflush
1777 gdm 1 59 0 187M 35M sleep 0:16 0.06% gdm-simple-gree
561 root 1 59 0 2728K 1692K sleep 0:03 0.04% in.routed
1916 root 1 59 0 3884K 2208K cpu/0 0:00 0.03% top
1713 root 1 59 0 395M 84M sleep 0:10 0.03% Xorg
346 root 1 59 0 6264K 3592K sleep 0:11 0.00% ntpd
1778 gdm 1 59 0 172M 18M sleep 0:00 0.00% gnome-power-man
1863 gernot 1 59 0 7788K 4868K sleep 0:00 0.00% sshd
738 root 16 59 0 14M 9228K sleep 0:13 0.00% smbd
10 root 13 59 0 13M 11M sleep 0:05 0.00% svc.startd
622 root 4 59 0 4600K 3484K sleep 0:01 0.00% inetd
692 root 26 59 0 19M 12M sleep 0:05 0.00% fmd
747 root 1 59 0 6020K 2168K sleep 0:02 0.00% sendmail
249 root 6 59 0 4616K 3308K sleep 0:03 0.00% devfsadm
700 noaccess 1 59 0 2520K 1624K sleep 0:01 0.00% mdnsd
124 root 138 99 -20 0K 0K sleep 13:39 0.00% zpool-tank
12 root 17 59 0 14M 13M sleep 0:19 0.00% svc.configd
777 root 1 59 0 6668K 5564K sleep 0:16 0.00% intrd
389 root 30 59 0 6768K 4280K sleep 0:05 0.00% nscd
456 root 4 59 0 7436K 6080K sleep 0:04 0.00% hald
555 root 1 59 0 3496K 2128K sleep 0:03 0.00% hald-addon-acpi
6 root 138 99 -20 0K 0K sleep 0:02 0.00% zpool-rpool
796 root 1 59 0 7252K 3660K sleep 0:02 0.00% miniserv.pl
644 daemon 5 59 0 5452K 3284K sleep 0:01 0.00% idmapd
49 netadm 4 59 0 3860K 2744K sleep 0:01 0.00% ipmgmtd
2 root 2 98 -20 0K 0K sleep 0:00 0.00% pageout
1859 root 1 60 0 9796K 2384K sleep 0:00 0.00% afpd
178 root 5 60 -20 2484K 1500K sleep 0:00 0.00% zonestatd
4 root 3 60 -20 0K 0K sleep 0:00 0.00% kcfpoold
1771 gdm 1 59 0 202M 49M sleep 0:00 0.00% gnome-settings-
1776 gdm 1 59 0 113M 13M sleep 0:00 0.00% metacity
1755 gdm 2 59 0 15M 10M sleep 0:00 0.00% gnome-session
1768 gdm 1 59 0 8196K 6992K sleep 0:00 0.00% gconfd-2
1769 gdm 1 59 0 7772K 5872K sleep 0:00 0.00% at-spi-registry
1856 root 1 59 0 19M 5464K sleep 0:00 0.00% cnid_dbd
1900 root 1 59 0 19M 5156K sleep 0:00 0.00% cnid_dbd
1773 gdm 2 59 0 7164K 5152K sleep 0:00 0.00% bonobo-activati
1712 root 2 59 0 6196K 4344K sleep 0:00 0.00% gdm-simple-slav
811 root 2 59 0 4684K 3524K sleep 0:00 0.00% gdm-binary
599 root 2 59 0 4400K 3348K sleep 0:00 0.00% console-kit-dae
1862 root 1 59 0 5804K 3192K sleep 0:00 0.00% sshd
168 root 17 59 0 6020K 3184K sleep 0:00 0.00% syseventd
766 root 1 59 0 8008K 3144K sleep 0:00 0.00% afpd
1779 root 1 59 0 4236K 3100K sleep 0:00 0.00% gdm-session-wor
1775 gdm 1 59 0 4116K 2968K sleep 0:00 0.00% gvfsd
252 root 4 59 0 3800K 2784K sleep 0:00 0.00% picld
551 root 1 59 0 4292K 2736K sleep 0:00 0.00% hald-addon-cpuf
46 netcfg 3 59 0 3440K 2540K sleep 0:00 0.00% netcfgd
1878 root 1 59 0 3816K 2520K sleep 0:00 0.00% bash
1866 gernot 1 59 0 3816K 2516K sleep 0:00 0.00% bash
543 root 1 59 0 3896K 2340K sleep 0:00 0.00% hald-addon-netw
678 root 1 59 0 4168K 2320K sleep 0:00 0.00% rmvolmgr
459 root 1 59 0 3784K 2308K sleep 0:00 0.00% hald-runner
693 root 11 59 0 4216K 2188K sleep 0:00 0.00% syslogd
330 root 1 59 0 3408K 2092K sleep 0:00 0.00% dbus-daemon
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr cd s0 s1 s2 in sy cs us sy id
0 0 0 5195360 2094636 1 4 0 0 0 0 1 0 2 2 2 616 140 882 0 1 99
0 0 0 4236680 1137564 4 11 0 0 0 0 0 0 0 0 0 434 183 323 0 0 100
0 0 0 4236600 1137512 0 0 0 0 0 0 0 0 0 0 0 433 120 289 0 0 100
0 0 0 4236600 1137512 0 0 0 0 0 0 0 0 0 0 0 434 106 288 0 0 100
0 0 0 4236600 1137512 0 0 0 0 0 0 0 0 0 0 0 434 99 284 0 0 100
0 0 0 4236600 1137528 0 0 0 0 0 0 0 0 0 0 0 417 109 280 0 0 100
0 0 0 4236600 1137560 0 0 0 0 0 0 0 0 0 0 0 432 149 306 0 0 100
0 0 0 4236600 1137560 0 0 0 0 0 0 0 0 0 0 0 451 124 305 0 0 100
0 0 0 4236600 1137560 0 0 0 0 0 0 0 0 0 0 0 453 104 294 0 0 100
0 0 0 4236584 1137544 0 1 0 0 0 0 0 0 0 0 0 439 140 298 0 0 100
interrupt total rate
--------------------------------
clock 23805444 100
audiohd 0 0
ecppc0 0 0
--------------------------------
Total 23805444 100
0 swap ins
0 swap outs
0 pages swapped in
0 pages swapped out
836130 total address trans. faults taken
3 page ins
0 page outs
3 pages paged in
0 pages paged out
168367 total reclaims
168367 reclaims from free list
0 micro (hat) faults
836130 minor (as) faults
3 major faults
137112 copy-on-write faults
384370 zero fill page faults
145404 pages examined by the clock daemon
0 revolutions of the clock hand
0 pages freed by the clock daemon
1482 forks
435 vforks
1365 execs
209981025 cpu context switches
146541748 device interrupts
1697366 traps
33376874 system calls
3003291 total name lookups (cache hits 92%)
48690 user cpu
430826 system cpu
47131950 idle cpu
0 wait cpu
_______________________________________________
OpenIndiana-discuss mailing list
http://openindiana.org/mailman/listinfo/openindiana-discuss
--
Once you have their hardware. Never give it back.
(The First Rule of Hardware Acquisition)
Gernot Wolf
2011-10-26 09:05:47 UTC
Permalink
Yep, I found that one too when I googled the symptoms of my box. There
may be some relation between the symptoms; on the other hand, there are
also obvious differences: My system didn't start with almost no cpu load
that slowly increased until the system crashed, but showed over 50% cpu
load immediately after reboot and stayed there (when idle). Contrary to
the symptoms described in this bug report, my system remained stable
(absolutely rock-stable, to be precise :)) over time.

So my concern wasn't that my system didn't function properly, but the
increased power usage (waste of energy) and the constant (surely
unhealthy) stress on the hardware (mainly my cpu, poor thing was working
itself to death ;)).

Of course there could be a connection between the root of my problems
and that described in this bug report, but I'm WAY out of my league
here. The only thing I noticed is that in both cases acpi seems to be
involved one way or the other.

My problems went away by setting acpi-user-options to 0x8. Of course we
still don't know what exactly was going wrong in my system. If someone
still wants to get to the heart of the matter, I'll happily run whatever
tests, diagnostic commands or scripts anyone will send me :)
Regards,
Gernot Wolf
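Should anyone take Gernot up on that offer, a few low-impact things that
could be collected in both ACPI modes for comparison (a sketch of plausible
next steps, not something that was actually requested in the thread):

eeprom acpi-user-options           # which mode the box is currently in
echo "::interrupts" | mdb -k       # how interrupts are routed (x86)
intrstat 10 3                      # per-device interrupt CPU time
powertop                           # wakeup sources and C-state residency
kstat -m cpu_info | grep -i cstate # C-state counters, where available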
Post by Josef 'Jeff' Sipek
https://www.illumos.org/issues/1333
Jeff.
Post by Gernot Wolf
Hello everyone,
sorry, I'm two days late, but here, as I promised, are the results
of a rerun of the diagnostic commands I initially run as well as the
dtrace commands you guys send me to troubleshoot my misbehaving
system (see attachements).
These are the results after I did
#eeprom acpi-user-options=0x8
and a subsequent reboot of my OI box. CPU load now is where it
should be, and has remained there through several reboots of the
system. The original problem never showed up again.
I'm posting this just in case anyone is interested in comparing some
of the numbers/infos.
Once again, thanks to all who have contributed their help. I'm still
positively surprised how quickly you guys reacted and solved my
problem. You've been great! :)
Regards,
Gernot Wolf
dtrace: script './dtrace_script' matched 4 probes
^C
CPU ID FUNCTION:NAME
0 2 :END DEVICE TIME (ns)
heci0 27006
i9151 27429
uhci1 52216
hci13940 64814
uhci0 73509
uhci3 80426
ehci0 126846
ehci1 126999
uhci4 129084
uhci2 131898
e1000g0 220297
pci-ide0 2304660
dtrace: description 'sched:::off-cpu ' matched 3 probes
^C
idmapd 1
in.routed 1
inetd 1
intrd 1
ipmgmtd 1
miniserv.pl 1
sendmail 1
sshd 1
svc.startd 1
ttymon 1
automountd 2
fmd 2
mdnsd 2
sac 2
gnome-power-mana 4
devfsadm 5
fsflush 9
ntpd 9
smbd 10
gdm-simple-greet 17
Xorg 18
dtrace 20
svc.configd 82
nscd 84
sched 2530
dtrace: description 'syscall:::entry ' matched 234 probes
^C
idmapd 1
inetd 1
nscd 1
svc.configd 1
svc.startd 1
fmd 2
gconfd-2 3
at-spi-registryd 5
mdnsd 6
devfsadm 8
sendmail 10
smbd 10
gnome-power-mana 14
sshd 19
metacity 23
ntpd 45
gdm-simple-greet 116
Xorg 323
dtrace 2633
dtrace: description 'profile-1001 ' matched 1 probe
^C
unix`i86_monitor+0x10
unix`cpu_idle_mwait+0xbe
unix`idle+0x114
unix`thread_start+0x8
1
unix`mutex_enter+0x10
genunix`taskq_thread_wait+0x84
genunix`taskq_thread+0x308
unix`thread_start+0x8
1
unix`mutex_enter+0x10
genunix`cyclic_softint+0xdc
unix`cbe_softclock+0x1a
unix`av_dispatch_softvect+0x5f
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
1
unix`mutex_enter+0x10
genunix`cdev_ioctl+0x45
specfs`spec_ioctl+0x5a
genunix`fop_ioctl+0x7b
genunix`ioctl+0x18e
unix`sys_syscall+0x17a
1
unix`mutex_enter+0x10
unix`page_free+0x10d
unix`page_destroy+0x104
unix`segkmem_free_vn+0xe6
unix`segkmem_free+0x27
genunix`vmem_xfree+0x104
genunix`vmem_free+0x29
unix`kfreea+0x54
unix`i_ddi_mem_free+0x5d
rootnex`rootnex_teardown_copybuf+0x24
rootnex`rootnex_coredma_unbindhdl+0x90
rootnex`rootnex_dma_unbindhdl+0x35
genunix`ddi_dma_unbind_handle+0x41
ata`ghd_dmafree_attr+0x2b
ata`ata_disk_memfree+0x20
gda`gda_free+0x39
dadk`dadk_iodone+0xbf
dadk`dadk_pktcb+0xc6
ata`ata_disk_complete+0x119
ata`ata_hba_complete+0x38
1
i915`i915_gem_retire_requests+0xc
i915`i915_gem_retire_work_handler+0x3d
genunix`callout_list_expire+0x77
genunix`callout_expire+0x31
genunix`callout_execute+0x1e
genunix`taskq_thread+0x285
unix`thread_start+0x8
1
SDC`sysdc_update+0x1f5
genunix`callout_list_expire+0x77
genunix`callout_expire+0x31
genunix`callout_execute+0x1e
genunix`taskq_thread+0x285
unix`thread_start+0x8
1
genunix`fsflush_do_pages+0xb3
genunix`fsflush+0x39a
unix`thread_start+0x8
1
unix`cpu_idle_exit+0x1fc
unix`cpu_idle_mwait+0xfb
unix`idle+0x114
unix`thread_start+0x8
1
unix`tsc_read+0x5
genunix`gethrtime_unscaled+0xd
genunix`syscall_mstate+0x4a
unix`sys_syscall+0x10e
1
unix`tsc_read+0x5
genunix`gethrtime_unscaled+0xd
genunix`new_mstate+0x4b
unix`trap+0x1fc
unix`0xfffffffffb8001d6
1
unix`page_nextn+0xfe
genunix`fsflush_do_pages+0x104
genunix`fsflush+0x39a
unix`thread_start+0x8
1
unix`tsc_read+0x9
genunix`gethrtime_unscaled+0xd
genunix`syscall_mstate+0x4a
unix`0xfffffffffb800c86
1
unix`sys_syscall32+0x102
1
genunix`ioctl+0x4a
unix`sys_syscall+0x17a
1
unix`do_splx+0x8d
unix`xc_common+0x231
unix`xc_call+0x46
unix`hat_tlb_inval+0x283
unix`x86pte_inval+0xaa
unix`hat_pte_unmap+0xed
unix`hat_unload_callback+0x193
unix`hat_unload+0x41
unix`segkmem_free_vn+0x6f
unix`segkmem_free+0x27
genunix`vmem_xfree+0x104
genunix`vmem_free+0x29
unix`kfreea+0x54
unix`i_ddi_mem_free+0x5d
rootnex`rootnex_teardown_copybuf+0x24
rootnex`rootnex_coredma_unbindhdl+0x90
rootnex`rootnex_dma_unbindhdl+0x35
genunix`ddi_dma_unbind_handle+0x41
ata`ghd_dmafree_attr+0x2b
ata`ata_disk_memfree+0x20
1
unix`page_nextn+0x12
genunix`fsflush_do_pages+0x104
genunix`fsflush+0x39a
unix`thread_start+0x8
1
unix`page_nextn+0x14
genunix`fsflush_do_pages+0x104
genunix`fsflush+0x39a
unix`thread_start+0x8
1
genunix`callout_downheap+0x7e
genunix`callout_heap_delete+0x18b
genunix`callout_normal+0x27
genunix`cyclic_softint+0xdc
unix`cbe_softclock+0x1a
unix`av_dispatch_softvect+0x5f
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
1
genunix`fsflush_do_pages+0x104
genunix`fsflush+0x39a
unix`thread_start+0x8
1
genunix`syscall_mstate+0x3e
unix`sys_syscall+0x1a1
1
unix`tsc_gethrtime+0x65
genunix`gethrtime+0xd
genunix`timeout_generic+0x46
genunix`timeout+0x5b
SDC`sysdc_update+0x311
genunix`callout_list_expire+0x77
genunix`callout_expire+0x31
genunix`callout_execute+0x1e
genunix`taskq_thread+0x285
unix`thread_start+0x8
1
unix`page_nextn+0x46
genunix`fsflush_do_pages+0x104
genunix`fsflush+0x39a
unix`thread_start+0x8
1
sha1`sha1_block_data_order+0xdff
1
unix`ddi_get16+0x10
genunix`callout_list_expire+0x77
genunix`callout_expire+0x31
genunix`callout_execute+0x1e
genunix`taskq_thread+0x285
unix`thread_start+0x8
1
unix`cpu_idle_enter+0x109
unix`cpu_idle_mwait+0xdc
unix`idle+0x114
unix`thread_start+0x8
1
unix`page_lookup_create+0xd0
unix`page_lookup+0x26
genunix`swap_getapage+0xaf
genunix`swap_getpage+0x85
genunix`fop_getpage+0x9a
genunix`anon_zero+0xa3
genunix`segvn_faultpage+0x2a4
genunix`segvn_fault+0xc13
genunix`as_fault+0x5ee
unix`pagefault+0x99
unix`trap+0xe63
unix`0xfffffffffb8001d6
1
genunix`fsflush_do_pages+0x31d
genunix`fsflush+0x39a
unix`thread_start+0x8
1
genunix`cv_unsleep+0x64
genunix`setrun_locked+0x96
genunix`setrun+0x1e
genunix`cv_wakeup+0x36
genunix`callout_list_expire+0x77
genunix`callout_expire+0x31
genunix`callout_realtime+0x26
genunix`cyclic_softint+0xdc
unix`cbe_low_level+0x17
unix`av_dispatch_softvect+0x5f
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
1
unix`tsc_gethrtime+0x8d
genunix`gethrtime+0xd
genunix`lbolt_event_driven+0x18
genunix`ddi_get_lbolt+0xd
unix`setbackdq+0x122
genunix`sleepq_wakeone_chan+0x89
genunix`cv_signal+0x8e
genunix`taskq_dispatch+0x37c
genunix`callout_normal+0x121
genunix`cyclic_softint+0xdc
unix`cbe_softclock+0x1a
unix`av_dispatch_softvect+0x5f
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
1
dtrace`dtrace_dynvar_clean+0x31
dtrace`dtrace_state_clean+0x23
genunix`cyclic_softint+0xdc
unix`cbe_low_level+0x17
unix`av_dispatch_softvect+0x5f
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
1
genunix`pcache_poll+0x480
genunix`poll_common+0x427
genunix`pollsys+0xea
unix`sys_syscall+0x17a
1
genunix`syscall_mstate+0x81
unix`sys_syscall+0x1a1
1
TS`ts_setrun+0x1d
genunix`cv_unsleep+0x8c
genunix`setrun_locked+0x96
genunix`setrun+0x1e
genunix`cv_wakeup+0x36
genunix`callout_list_expire+0x77
genunix`callout_expire+0x31
genunix`callout_realtime+0x26
genunix`cyclic_softint+0xdc
unix`cbe_low_level+0x17
unix`av_dispatch_softvect+0x5f
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
1
rootnex`rootnex_get_sgl+0x1
rootnex`rootnex_dma_bindhdl+0x4c
genunix`ddi_dma_buf_bind_handle+0x117
ata`ghd_dma_buf_bind_attr+0x98
ata`ata_disk_memsetup+0xc3
gda`gda_pktprep+0xab
dadk`dadk_pktprep+0x3a
dadk`dadk_pkt+0x4d
strategy`dmult_enque+0x7d
dadk`dadk_strategy+0x7d
cmdk`cmdkstrategy+0x16c
genunix`bdev_strategy+0x75
genunix`ldi_strategy+0x59
zfs`vdev_disk_io_start+0xd0
zfs`zio_vdev_io_start+0x1ea
zfs`zio_execute+0x8d
genunix`taskq_thread+0x285
unix`thread_start+0x8
1
genunix`fsflush_do_pages+0x358
genunix`fsflush+0x39a
unix`thread_start+0x8
1
genunix`read
1
unix`kcopy+0x1b
genunix`copyin_nowatch+0x48
genunix`copyin_args32+0x3a
genunix`syscall_entry+0xb3
unix`_sys_sysenter_post_swapgs+0x12a
1
unix`cmt_ev_thread_swtch+0x2c
unix`pg_ev_thread_swtch+0x10d
unix`swtch+0xdb
genunix`cv_wait_sig_swap_core+0x174
genunix`cv_wait_sig_swap+0x18
genunix`cv_waituntil_sig+0x13c
genunix`poll_common+0x47f
genunix`pollsys+0xea
unix`sys_syscall+0x17a
1
unix`page_nextn+0x9a
genunix`fsflush_do_pages+0x104
genunix`fsflush+0x39a
unix`thread_start+0x8
1
genunix`cpu_decay+0x27
genunix`cpu_update_pct+0x89
unix`setbackdq+0x2b3
genunix`sleepq_wakeone_chan+0x89
genunix`cv_signal+0x8e
genunix`taskq_dispatch+0x37c
genunix`callout_normal+0x121
genunix`cyclic_softint+0xdc
unix`cbe_softclock+0x1a
unix`av_dispatch_softvect+0x5f
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
1
unix`ddi_io_put8+0xf
ata`ata_disk_start_dma_out+0x88
ata`ata_ctlr_fsm+0x1fb
ata`ata_hba_start+0x84
ata`ghd_waitq_process_and_mutex_hold+0xdf
ata`ghd_intr+0x8d
ata`ata_intr+0x27
unix`av_dispatch_autovect+0x7c
unix`dispatch_hardint+0x33
unix`switch_sp_and_call+0x13
1
unix`ddi_io_put8+0xf
ata`ata_disk_start_common+0xda
ata`ata_disk_start_dma_out+0x32
ata`ata_ctlr_fsm+0x1fb
ata`ata_hba_start+0x84
ata`ghd_waitq_process_and_mutex_hold+0xdf
ata`ghd_intr+0x8d
ata`ata_intr+0x27
unix`av_dispatch_autovect+0x7c
unix`dispatch_hardint+0x33
unix`switch_sp_and_call+0x13
1
unix`rw_exit+0xf
genunix`callout_heap_delete+0x1e3
genunix`callout_realtime+0x1e
genunix`cyclic_softint+0xdc
unix`cbe_low_level+0x17
unix`av_dispatch_softvect+0x5f
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
1
unix`setbackdq+0x328
genunix`cv_unsleep+0x8c
genunix`setrun_locked+0x96
genunix`setrun+0x1e
genunix`cv_wakeup+0x36
genunix`callout_list_expire+0x77
genunix`callout_expire+0x31
genunix`callout_realtime+0x26
genunix`cyclic_softint+0xdc
unix`cbe_low_level+0x17
unix`av_dispatch_softvect+0x5f
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
1
unix`av_check_softint_pending+0x8
unix`av_dispatch_softvect+0x48
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
1
genunix`thread_lock+0x41
genunix`setrun+0x16
genunix`cv_wakeup+0x36
genunix`callout_list_expire+0x77
genunix`callout_expire+0x31
genunix`callout_realtime+0x26
genunix`cyclic_softint+0xdc
unix`cbe_low_level+0x17
unix`av_dispatch_softvect+0x5f
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
1
unix`splr+0x92
genunix`thread_lock+0x1d
genunix`post_syscall+0x669
unix`0xfffffffffb800c91
1
genunix`fsflush_do_pages+0x395
genunix`fsflush+0x39a
unix`thread_start+0x8
1
unix`plcnt_inc_dec+0xa3
unix`page_ctr_sub_internal+0x5d
unix`page_ctr_sub+0x78
unix`page_get_mnode_freelist+0x346
unix`page_get_anylist+0x200
unix`page_create_io+0x1e8
unix`page_create_io_wrapper+0x57
unix`segkmem_xalloc+0xc0
unix`segkmem_alloc_io_4G+0x3b
genunix`vmem_xalloc+0x546
genunix`vmem_alloc+0x161
unix`kalloca+0x203
unix`i_ddi_mem_alloc+0x173
rootnex`rootnex_setup_copybuf+0x11f
rootnex`rootnex_bind_slowpath+0x70
rootnex`rootnex_coredma_bindhdl+0x334
rootnex`rootnex_dma_bindhdl+0x4c
genunix`ddi_dma_buf_bind_handle+0x117
ata`ghd_dma_buf_bind_attr+0x98
ata`ata_disk_memsetup+0xc3
1
unix`mul32+0xd
genunix`scalehrtime+0x19
genunix`cpu_update_pct+0x76
unix`setbackdq+0x2b3
genunix`sleepq_wakeone_chan+0x89
genunix`cv_signal+0x8e
genunix`taskq_dispatch+0x37c
genunix`callout_normal+0x121
genunix`cyclic_softint+0xdc
unix`cbe_softclock+0x1a
unix`av_dispatch_softvect+0x5f
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
1
genunix`cyclic_timer+0x89
genunix`cyclic_softint+0xdc
unix`cbe_softclock+0x1a
unix`av_dispatch_softvect+0x5f
unix`dispatch_softint+0x34
unix`switch_sp_and_call+0x13
2
unix`atomic_swap_64+0x7
genunix`callout_list_expire+0x77
genunix`callout_expire+0x31
genunix`callout_execute+0x1e
genunix`taskq_thread+0x285
unix`thread_start+0x8
2
unix`page_nextn
genunix`fsflush+0x39a
unix`thread_start+0x8
2
unix`ddi_io_put16+0x10
genunix`callout_list_expire+0x77
genunix`callout_expire+0x31
genunix`callout_execute+0x1e
genunix`taskq_thread+0x285
unix`thread_start+0x8
2
genunix`fsflush_do_pages+0x2fa
genunix`fsflush+0x39a
unix`thread_start+0x8
3
genunix`fsflush_do_pages+0x13e
genunix`fsflush+0x39a
unix`thread_start+0x8
4
unix`dispatch_softint+0x27
unix`switch_sp_and_call+0x13
5
genunix`fsflush_do_pages+0x124
genunix`fsflush+0x39a
unix`thread_start+0x8
7
18
unix`i86_mwait+0xd
unix`cpu_idle_mwait+0xf1
unix`idle+0x114
unix`thread_start+0x8
16862
dtrace: description 'sched:::off-cpu ' matched 3 probes
^C
unix`swtch_from_zombie+0x96
genunix`lwp_exit+0x3cd
genunix`syslwp_exit+0x22
unix`_sys_sysenter_post_swapgs+0x149
1
unix`swtch+0x145
genunix`cv_wait+0x61
mac`mac_rx_srs_poll_ring+0xa0
unix`thread_start+0x8
1
unix`swtch+0x145
unix`preempt+0xd7
genunix`post_syscall+0x651
genunix`syscall_exit+0x59
unix`0xfffffffffb800ec9
1
unix`swtch+0x145
unix`preempt+0xd7
unix`trap+0x1503
unix`sys_rtt_common+0x68
unix`_sys_rtt_ints_disabled+0x8
1
unix`swtch+0x145
genunix`cv_timedwait_hires+0xe0
genunix`cv_reltimedwait+0x4f
genunix`taskq_thread_wait+0x74
genunix`taskq_d_thread+0x144
unix`thread_start+0x8
2
unix`swtch+0x145
genunix`cv_timedwait_hires+0xe0
genunix`cv_reltimedwait+0x4f
idm`idm_wd_thread+0x1d7
unix`thread_start+0x8
2
unix`swtch+0x145
unix`preempt+0xd7
genunix`post_syscall+0x651
unix`0xfffffffffb800c91
2
unix`swtch+0x145
genunix`cv_timedwait_sig_hires+0x1e9
genunix`cv_waituntil_sig+0xba
genunix`lwp_park+0x157
genunix`syslwp_park+0x31
unix`sys_syscall32+0xff
3
unix`swtch+0x145
genunix`cv_wait+0x61
zfs`txg_thread_wait+0x5f
zfs`txg_sync_thread+0x1de
unix`thread_start+0x8
3
unix`swtch+0x145
genunix`cv_timedwait_hires+0xe0
genunix`cv_reltimedwait+0x4f
scsi`scsi_watch_thread+0x330
unix`thread_start+0x8
3
unix`swtch+0x145
genunix`cv_timedwait_hires+0xe0
genunix`cv_timedwait+0x5a
zfs`txg_thread_wait+0x7c
zfs`txg_sync_thread+0x118
unix`thread_start+0x8
3
unix`swtch+0x145
genunix`cv_wait_sig_swap_core+0x174
genunix`cv_wait_sig_swap+0x18
genunix`cv_waituntil_sig+0x13c
genunix`poll_common+0x47f
genunix`pollsys+0xea
unix`_sys_sysenter_post_swapgs+0x149
3
unix`swtch+0x145
genunix`cv_wait+0x61
zfs`txg_thread_wait+0x5f
zfs`txg_quiesce_thread+0x94
unix`thread_start+0x8
6
unix`swtch+0x145
genunix`cv_wait+0x61
genunix`fsflush+0x201
unix`thread_start+0x8
9
unix`swtch+0x145
genunix`cv_timedwait_hires+0xe0
genunix`cv_reltimedwait+0x4f
smbsrv`smb_thread_continue_timedwait_locked+0x45
smbsrv`smb_thread_continue_timedwait+0x3c
smbsrv`smb_server_timers+0x5d
smbsrv`smb_thread_entry_point+0x69
unix`thread_start+0x8
9
unix`swtch+0x145
genunix`cv_timedwait_hires+0xe0
genunix`cv_reltimedwait+0x4f
genunix`seg_pasync_thread+0xcb
unix`thread_start+0x8
9
unix`swtch+0x145
genunix`cv_timedwait_hires+0xe0
genunix`cv_timedwait+0x5a
zfs`l2arc_feed_thread+0xa1
unix`thread_start+0x8
9
unix`swtch+0x145
genunix`cv_timedwait_hires+0xe0
genunix`cv_timedwait+0x5a
zfs`arc_reclaim_thread+0x13d
unix`thread_start+0x8
9
unix`swtch+0x145
genunix`cv_wait_sig_swap_core+0x174
genunix`cv_wait_sig_swap+0x18
genunix`sigsuspend+0x107
unix`_sys_sysenter_post_swapgs+0x149
9
unix`swtch+0x145
genunix`cv_timedwait_sig_hires+0x1e9
genunix`cv_waituntil_sig+0xba
genunix`lwp_park+0x157
genunix`syslwp_park+0x31
unix`_sys_sysenter_post_swapgs+0x149
10
unix`swtch+0x145
genunix`cv_timedwait_sig_hires+0x1e9
genunix`cv_waituntil_sig+0xba
genunix`nanosleep+0x120
unix`_sys_sysenter_post_swapgs+0x149
10
unix`swtch+0x145
genunix`cv_wait_sig_swap_core+0x174
genunix`cv_wait_sig_swap+0x18
genunix`cv_waituntil_sig+0x13c
genunix`poll_common+0x47f
genunix`pollsys+0xea
unix`sys_syscall+0x17a
18
unix`swtch+0x145
genunix`cv_timedwait_sig_hires+0x1e9
genunix`cv_waituntil_sig+0xba
genunix`lwp_park+0x157
genunix`syslwp_park+0x31
unix`sys_syscall+0x17a
19
unix`swtch_to+0xe6
unix`idle+0xb8
unix`thread_start+0x8
24
unix`swtch+0x145
genunix`cv_timedwait_sig_hires+0x1e9
genunix`cv_waituntil_sig+0xba
genunix`poll_common+0x47f
genunix`pollsys+0xea
unix`_sys_sysenter_post_swapgs+0x149
25
unix`swtch+0x145
genunix`cv_timedwait_hires+0xe0
genunix`cv_reltimedwait+0x4f
sata`sata_event_daemon+0xfe
unix`thread_start+0x8
182
unix`swtch+0x145
genunix`cv_timedwait_hires+0xe0
genunix`cv_reltimedwait+0x4f
stmf`stmf_svc_timeout+0x23a
stmf`stmf_svc+0x129
genunix`taskq_thread+0x285
unix`thread_start+0x8
453
unix`swtch+0x145
genunix`cv_wait+0x61
genunix`taskq_thread_wait+0x84
genunix`taskq_thread+0x308
unix`thread_start+0x8
690
unix`swtch+0x145
unix`idle+0xc4
unix`thread_start+0x8
1064
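For reference, on-CPU and off-CPU stack aggregations of this shape are normally collected with DTrace one-liners along the following lines (a sketch only; the exact invocations used for the output above may have differed). The first samples kernel stacks while they are on CPU, the second aggregates the kernel stack at every off-CPU event; both print their aggregation on ^C:

# dtrace -n 'profile-997 /arg0/ { @[stack()] = count(); }'
# dtrace -n 'sched:::off-cpu { @[stack()] = count(); }'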
device | cpu0 %tim cpu1 %tim
-------------+------------------------------
e1000g#0 | 0 0,0 3 0,0
ehci#0 | 3 0,0 0 0,0
ehci#1 | 0 0,0 3 0,0
hci1394#0 | 1 0,0 0 0,0
heci#0 | 1 0,0 0 0,0
i915#1 | 1 0,0 0 0,0
pci-ide#0 | 0 0,0 2 0,0
uhci#0 | 1 0,0 0 0,0
uhci#1 | 0 0,0 0 0,0
uhci#2 | 0 0,0 3 0,0
uhci#3 | 1 0,0 0 0,0
uhci#4 | 3 0,0 0 0,0
tty cmdk0 sd0 sd1 sd2 cpu
tin tout kps tps serv kps tps serv kps tps serv kps tps serv us sy wt id
0 0 2 0 15 164 2 11 169 2 10 164 2 11 0 1 0 99
0 47 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
Profiling interrupt: 5840 events in 30.107 seconds (194 events/sec)
Count indv cuml rcnt nsec Hottest CPU+PIL Caller
-------------------------------------------------------------------------------
5795 99% 99% 0.00 1464 cpu[0] i86_mwait
23 0% 100% 0.00 1754 cpu[1] fsflush_do_pages
9 0% 100% 0.00 3766 cpu[0] (usermode)
7 0% 100% 0.00 1619 cpu[1] page_nextn
1 0% 100% 0.00 2717 cpu[1] pcacheset_resolve
1 0% 100% 0.00 2365 cpu[0]+11 restore_mstate
1 0% 100% 0.00 4830 cpu[1] page_unlock
1 0% 100% 0.00 1157 cpu[1] do_splx
1 0% 100% 0.00 3023 cpu[1] htable_release
1 0% 100% 0.00 2268 cpu[1] hment_compare
-------------------------------------------------------------------------------
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 2 0 2 389 178 462 6 21 11 0 74 0 1 0 99
1 1 0 2 225 139 417 4 21 15 0 64 0 1 0 99
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 1 309 106 168 0 8 4 0 102 0 0 0 100
1 0 0 1 103 50 121 0 8 6 0 44 0 0 0 100
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 1 310 106 134 0 6 1 0 85 0 0 0 100
1 0 0 2 138 62 158 0 5 0 0 49 0 0 0 99
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 0 307 106 135 0 7 1 0 79 0 0 0 100
1 0 0 0 138 61 151 0 8 1 0 27 0 1 0 99
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 1 0 0 307 106 144 0 5 1 0 109 0 0 0 100
1 0 0 0 128 59 145 0 6 2 0 36 0 0 0 100
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 0 306 105 137 0 4 1 0 79 0 0 0 100
1 0 0 0 130 63 141 0 4 0 0 23 0 1 0 99
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 0 306 105 144 0 6 3 0 82 0 0 0 100
1 0 0 0 134 61 162 0 6 0 0 71 0 0 0 99
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 1 308 106 164 0 5 1 0 75 0 0 0 100
1 0 0 1 101 43 106 0 5 0 0 29 0 0 0 100
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 0 309 107 133 0 8 1 0 81 0 0 0 100
1 0 0 0 139 55 149 0 8 1 0 45 0 1 0 99
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 1 311 107 149 0 6 1 0 69 0 0 0 100
1 0 0 1 116 51 125 0 6 0 0 40 0 0 0 100
OpenIndiana PowerTOP version 1.2 (C) 2009 Intel Corporation
Collecting data for 30.00 second(s)
C-states (idle power) Avg Residency
C0 (cpu running) (0.0%)
C1 4.8ms (100.0%)
P-states (frequencies)
1200 Mhz 88.9%
1800 Mhz 11.1%
Wakeups-from-idle per second: 484.2 interval: 30.0s
37.9% (183.6) sched : <xcalls> unix`dtrace_xcall_func
20.7% (100.0) <kernel> : genunix`clock
16.6% ( 80.2) <kernel> : genunix`cv_wakeup
10.3% ( 50.0) <kernel> : SDC`sysdc_update
4.0% ( 19.3) <kernel> : uhci`uhci_handle_root_hub_status_change
2.1% ( 10.0) <kernel> : ata`ghd_timeout
1.6% ( 7.7) <kernel> : ehci`ehci_handle_root_hub_status_change
1.0% ( 5.0) <kernel> : uhci`uhci_cmd_timeout_hdlr
0.9% ( 4.3) <interrupt> : pci-ide#0
0.8% ( 4.0) <kernel> : genunix`schedpaging
0.4% ( 2.0) <kernel> : cpudrv`cpudrv_monitor_disp
0.3% ( 1.7) <kernel> : i915`i915_gem_retire_work_handler
0.3% ( 1.7) <interrupt> : e1000g#0
0.3% ( 1.6) intrd : <xcalls> unix`hati_demap_func
0.2% ( 1.0) <kernel> : TS`ts_update
0.2% ( 1.0) <kernel> : e1000g`e1000g_local_timer
0.2% ( 1.0) <kernel> : genunix`clock_realtime_fire
0.2% ( 1.0) <interrupt> : uhci#0
0.2% ( 1.0) <interrupt> : uhci#1
0.2% ( 1.0) <interrupt> : ehci#0
0.2% ( 1.0) <interrupt> : uhci#3
0.2% ( 1.0) <interrupt> : uhci#2
0.2% ( 1.0) <interrupt> : ehci#1
0.2% ( 1.0) <interrupt> : uhci#4
0.2% ( 0.7) sched : <xcalls> unix`hati_demap_func
0.1% ( 0.6) sched : <xcalls> unix`speedstep_pstate_transition
0.1% ( 0.5) <kernel> : heci`heci_wd_timer
0.1% ( 0.3) <kernel> : kcf`rnd_handler
0.0% ( 0.2) <kernel> : ahci`ahci_watchdog_handler
0.0% ( 0.2) zpool-rpool : <xcalls> unix`speedstep_pstate_transition
0.0% ( 0.1) <kernel> : swrand`rnd_handler
0.0% ( 0.1) <kernel> : ip`igmp_slowtimo
0.0% ( 0.1) <kernel> : ip`squeue_fire
0.0% ( 0.1) <kernel> : genunix`vmem_update
0.0% ( 0.1) smbd : <xcalls> unix`speedstep_pstate_transition
0.0% ( 0.1) <kernel> : ip`tcp_timer_callback
0.0% ( 0.1) <kernel> : genunix`kmem_update
0.0% ( 0.0) <kernel> : genunix`realitexpire
powertop: battery kstat not found (-1)
no ACPI power usage estimate available
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
1777 gdm 187M 35M sleep 59 0 0:00:16 0,1% gdm-simple-gree/1
1909 root 4568K 3364K cpu0 59 0 0:00:00 0,0% prstat/1
1713 root 395M 84M sleep 59 0 0:00:10 0,0% Xorg/1
389 root 6768K 4280K sleep 59 0 0:00:05 0,0% nscd/30
721 root 1984K 1120K sleep 59 0 0:00:00 0,0% smbiod-svc/2
738 root 14M 9228K sleep 59 0 0:00:13 0,0% smbd/16
692 root 19M 12M sleep 59 0 0:00:05 0,0% fmd/27
678 root 4168K 2320K sleep 59 0 0:00:00 0,0% rmvolmgr/1
644 daemon 5452K 3284K sleep 59 0 0:00:01 0,0% idmapd/5
626 root 1620K 980K sleep 59 0 0:00:00 0,0% utmpd/1
622 root 4600K 3484K sleep 59 0 0:00:01 0,0% inetd/4
693 root 4216K 2188K sleep 59 0 0:00:00 0,0% syslogd/11
635 root 2880K 1640K sleep 59 0 0:00:00 0,0% automountd/2
636 root 2952K 1548K sleep 59 0 0:00:00 0,0% automountd/4
612 root 2164K 1352K sleep 59 0 0:00:00 0,0% sac/1
615 root 1988K 1440K sleep 59 0 0:00:00 0,0% ttymon/1
1776 gdm 113M 13M sleep 59 0 0:00:00 0,0% metacity/1
603 daemon 3204K 1304K sleep 59 0 0:00:00 0,0% rpcbind/1
599 root 4400K 3348K sleep 59 0 0:00:00 0,0% console-kit-dae/2
1769 gdm 7772K 5872K sleep 59 0 0:00:00 0,0% at-spi-registry/1
698 root 4388K 1920K sleep 59 0 0:00:00 0,0% sshd/1
796 root 7252K 3660K sleep 59 0 0:00:02 0,0% miniserv.pl/1
700 noaccess 2520K 1624K sleep 59 0 0:00:01 0,0% mdnsd/1
1863 gernot 7748K 4852K sleep 59 0 0:00:00 0,0% sshd/1
235 root 2164K 1612K sleep 59 0 0:00:00 0,0% powerd/4
561 root 2728K 1692K sleep 59 0 0:00:03 0,0% in.routed/1
330 root 3408K 2092K sleep 59 0 0:00:00 0,0% dbus-daemon/1
764 root 6012K 1492K sleep 59 0 0:00:00 0,0% cnid_metad/1
555 root 3496K 2128K sleep 59 0 0:00:03 0,0% hald-addon-acpi/1
502 root 2388K 1184K sleep 59 0 0:00:00 0,0% in.ndpd/1
715 root 2164K 1344K sleep 59 0 0:00:00 0,0% dns-sd/1
1755 gdm 15M 10M sleep 59 0 0:00:00 0,0% gnome-session/2
249 root 4616K 3308K sleep 59 0 0:00:03 0,0% devfsadm/6
316 root 1980K 956K sleep 59 0 0:00:00 0,0% iscsid/2
551 root 4292K 2736K sleep 59 0 0:00:00 0,0% hald-addon-cpuf/1
714 root 2164K 1344K sleep 59 0 0:00:00 0,0% dns-sd/1
766 root 8008K 3144K sleep 59 0 0:00:00 0,0% afpd/1
563 root 2832K 2076K sleep 59 0 0:00:00 0,0% hald-addon-stor/3
459 root 3784K 2308K sleep 59 0 0:00:00 0,0% hald-runner/1
456 root 7436K 6080K sleep 59 0 0:00:04 0,0% hald/4
543 root 3896K 2340K sleep 59 0 0:00:00 0,0% hald-addon-netw/1
1712 root 6196K 4344K sleep 59 0 0:00:00 0,0% gdm-simple-slav/2
168 root 6020K 3184K sleep 59 0 0:00:00 0,0% syseventd/18
1779 root 4236K 3100K sleep 59 0 0:00:00 0,0% gdm-session-wor/1
684 root 2340K 1456K sleep 59 0 0:00:00 0,0% ttymon/1
346 root 6264K 3592K sleep 59 0 0:00:11 0,0% ntpd/1
252 root 3800K 2784K sleep 59 0 0:00:00 0,0% picld/4
124 root 0K 0K sleep 99 -20 0:13:39 0,0% zpool-tank/138
46 netcfg 3440K 2540K sleep 59 0 0:00:00 0,0% netcfgd/3
178 root 2484K 1500K sleep 60 -20 0:00:00 0,0% zonestatd/5
579 root 2256K 1392K sleep 59 0 0:00:00 0,0% cron/1
777 root 6668K 5564K sleep 59 0 0:00:16 0,0% intrd/1
49 netadm 3860K 2744K sleep 59 0 0:00:01 0,0% ipmgmtd/4
47 root 2980K 2024K sleep 59 0 0:00:00 0,0% dlmgmtd/6
12 root 14M 13M sleep 59 0 0:00:19 0,0% svc.configd/17
10 root 13M 11M sleep 59 0 0:00:05 0,0% svc.startd/13
143 root 2548K 1628K sleep 59 0 0:00:00 0,0% pfexecd/3
1 root 2720K 1852K sleep 59 0 0:00:00 0,0% init/1
6 root 0K 0K sleep 99 -20 0:00:02 0,0% zpool-rpool/138
Total: 77 processes, 528 lwps, load averages: 0,01, 0,10, 0,11
last pid: 1917; load avg: 0.01, 0.06, 0.10; up 2+18:06:01 17:59:17
79 processes: 78 sleeping, 1 on cpu
CPU states: 99.7% idle, 0.0% user, 0.3% kernel, 0.0% iowait, 0.0% swap
Kernel: 300 ctxsw, 1 trap, 457 intr, 122 syscall
Memory: 8118M phys mem, 1110M free mem, 4058M total swap, 4058M free swap
PID USERNAME NLWP PRI NICE SIZE RES STATE TIME CPU COMMAND
3 root 1 60 -20 0K 0K sleep 12:05 0.14% fsflush
1777 gdm 1 59 0 187M 35M sleep 0:16 0.06% gdm-simple-gree
561 root 1 59 0 2728K 1692K sleep 0:03 0.04% in.routed
1916 root 1 59 0 3884K 2208K cpu/0 0:00 0.03% top
1713 root 1 59 0 395M 84M sleep 0:10 0.03% Xorg
346 root 1 59 0 6264K 3592K sleep 0:11 0.00% ntpd
1778 gdm 1 59 0 172M 18M sleep 0:00 0.00% gnome-power-man
1863 gernot 1 59 0 7788K 4868K sleep 0:00 0.00% sshd
738 root 16 59 0 14M 9228K sleep 0:13 0.00% smbd
10 root 13 59 0 13M 11M sleep 0:05 0.00% svc.startd
622 root 4 59 0 4600K 3484K sleep 0:01 0.00% inetd
692 root 26 59 0 19M 12M sleep 0:05 0.00% fmd
747 root 1 59 0 6020K 2168K sleep 0:02 0.00% sendmail
249 root 6 59 0 4616K 3308K sleep 0:03 0.00% devfsadm
700 noaccess 1 59 0 2520K 1624K sleep 0:01 0.00% mdnsd
124 root 138 99 -20 0K 0K sleep 13:39 0.00% zpool-tank
12 root 17 59 0 14M 13M sleep 0:19 0.00% svc.configd
777 root 1 59 0 6668K 5564K sleep 0:16 0.00% intrd
389 root 30 59 0 6768K 4280K sleep 0:05 0.00% nscd
456 root 4 59 0 7436K 6080K sleep 0:04 0.00% hald
555 root 1 59 0 3496K 2128K sleep 0:03 0.00% hald-addon-acpi
6 root 138 99 -20 0K 0K sleep 0:02 0.00% zpool-rpool
796 root 1 59 0 7252K 3660K sleep 0:02 0.00% miniserv.pl
644 daemon 5 59 0 5452K 3284K sleep 0:01 0.00% idmapd
49 netadm 4 59 0 3860K 2744K sleep 0:01 0.00% ipmgmtd
2 root 2 98 -20 0K 0K sleep 0:00 0.00% pageout
1859 root 1 60 0 9796K 2384K sleep 0:00 0.00% afpd
178 root 5 60 -20 2484K 1500K sleep 0:00 0.00% zonestatd
4 root 3 60 -20 0K 0K sleep 0:00 0.00% kcfpoold
1771 gdm 1 59 0 202M 49M sleep 0:00 0.00% gnome-settings-
1776 gdm 1 59 0 113M 13M sleep 0:00 0.00% metacity
1755 gdm 2 59 0 15M 10M sleep 0:00 0.00% gnome-session
1768 gdm 1 59 0 8196K 6992K sleep 0:00 0.00% gconfd-2
1769 gdm 1 59 0 7772K 5872K sleep 0:00 0.00% at-spi-registry
1856 root 1 59 0 19M 5464K sleep 0:00 0.00% cnid_dbd
1900 root 1 59 0 19M 5156K sleep 0:00 0.00% cnid_dbd
1773 gdm 2 59 0 7164K 5152K sleep 0:00 0.00% bonobo-activati
1712 root 2 59 0 6196K 4344K sleep 0:00 0.00% gdm-simple-slav
811 root 2 59 0 4684K 3524K sleep 0:00 0.00% gdm-binary
599 root 2 59 0 4400K 3348K sleep 0:00 0.00% console-kit-dae
1862 root 1 59 0 5804K 3192K sleep 0:00 0.00% sshd
168 root 17 59 0 6020K 3184K sleep 0:00 0.00% syseventd
766 root 1 59 0 8008K 3144K sleep 0:00 0.00% afpd
1779 root 1 59 0 4236K 3100K sleep 0:00 0.00% gdm-session-wor
1775 gdm 1 59 0 4116K 2968K sleep 0:00 0.00% gvfsd
252 root 4 59 0 3800K 2784K sleep 0:00 0.00% picld
551 root 1 59 0 4292K 2736K sleep 0:00 0.00% hald-addon-cpuf
46 netcfg 3 59 0 3440K 2540K sleep 0:00 0.00% netcfgd
1878 root 1 59 0 3816K 2520K sleep 0:00 0.00% bash
1866 gernot 1 59 0 3816K 2516K sleep 0:00 0.00% bash
543 root 1 59 0 3896K 2340K sleep 0:00 0.00% hald-addon-netw
678 root 1 59 0 4168K 2320K sleep 0:00 0.00% rmvolmgr
459 root 1 59 0 3784K 2308K sleep 0:00 0.00% hald-runner
693 root 11 59 0 4216K 2188K sleep 0:00 0.00% syslogd
330 root 1 59 0 3408K 2092K sleep 0:00 0.00% dbus-daemon
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr cd s0 s1 s2 in sy cs us sy id
0 0 0 5195360 2094636 1 4 0 0 0 0 1 0 2 2 2 616 140 882 0 1 99
0 0 0 4236680 1137564 4 11 0 0 0 0 0 0 0 0 0 434 183 323 0 0 100
0 0 0 4236600 1137512 0 0 0 0 0 0 0 0 0 0 0 433 120 289 0 0 100
0 0 0 4236600 1137512 0 0 0 0 0 0 0 0 0 0 0 434 106 288 0 0 100
0 0 0 4236600 1137512 0 0 0 0 0 0 0 0 0 0 0 434 99 284 0 0 100
0 0 0 4236600 1137528 0 0 0 0 0 0 0 0 0 0 0 417 109 280 0 0 100
0 0 0 4236600 1137560 0 0 0 0 0 0 0 0 0 0 0 432 149 306 0 0 100
0 0 0 4236600 1137560 0 0 0 0 0 0 0 0 0 0 0 451 124 305 0 0 100
0 0 0 4236600 1137560 0 0 0 0 0 0 0 0 0 0 0 453 104 294 0 0 100
0 0 0 4236584 1137544 0 1 0 0 0 0 0 0 0 0 0 439 140 298 0 0 100
interrupt total rate
--------------------------------
clock 23805444 100
audiohd 0 0
ecppc0 0 0
--------------------------------
Total 23805444 100
0 swap ins
0 swap outs
0 pages swapped in
0 pages swapped out
836130 total address trans. faults taken
3 page ins
0 page outs
3 pages paged in
0 pages paged out
168367 total reclaims
168367 reclaims from free list
0 micro (hat) faults
836130 minor (as) faults
3 major faults
137112 copy-on-write faults
384370 zero fill page faults
145404 pages examined by the clock daemon
0 revolutions of the clock hand
0 pages freed by the clock daemon
1482 forks
435 vforks
1365 execs
209981025 cpu context switches
146541748 device interrupts
1697366 traps
33376874 system calls
3003291 total name lookups (cache hits 92%)
48690 user cpu
430826 system cpu
47131950 idle cpu
0 wait cpu
_______________________________________________
OpenIndiana-discuss mailing list
http://openindiana.org/mailman/listinfo/openindiana-discuss
Gernot Wolf
2011-10-20 20:29:05 UTC
Permalink
Another try to get /var/adm/messages out. This time I didn't zip it,
just attached it as messages.txt.

Regards,
Gernot Wolf
Post by Gernot Wolf
Ok, for some reason this attachment refuses to go out :( Have to figure
that out...
Regards,
Gernot Wolf
Post by Gernot Wolf
Oops, something went wrong with my attachment. I'll try again...
Regards,
Gernot Wolf
Post by Gernot Wolf
You mean, besides being quite huge? I took a quick look at it, but other
than getting a headache by doing that, my limited unix skills
unfortunately fail me.
I've zipped it and attached it to this mail, maybe someone can get
anything out of it...
Regards,
Gernot
Post by Michael Stapleton
Gernot,
is there anything suspicious in /var/adm/messages?
Michael
On Thu, Oct 20, 2011 at 20:07, Michael Stapleton
Post by Michael Stapleton
That rules out userland.
Sched tells me that it is not a user process. If kernel code is
executing on a cpu, tools will report the sched process. The count was
how many times the process was taken off the CPU while dtrace was
running.
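When everything ends up attributed to sched like this, the usual next step is to see which kernel functions are actually burning the cycles. A standard DTrace profiling one-liner (a sketch, not specific to this thread) that keeps the 20 hottest kernel functions seen over 30 seconds:

# dtrace -n 'profile-997 /arg0/ { @[func(arg0)] = count(); } tick-30s { trunc(@, 20); exit(0); }'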
Post by Gernot Wolf
Yeah, I've been able to run these diagnostics on another OI box (at my
office, so much for OI not being used in production ;)), and noticed
that several values were quite different. I just don't
have any idea what these figures mean...
Anyway, here are the results of the dtrace command (I executed the
following one-liner twice):
dtrace -n 'sched:::off-cpu { @[execname] = count(); }'
dtrace: description 'sched:::off-cpu ' matched 3 probes
^C
ipmgmtd 1
gconfd-2 2
gnome-settings-d 2
idmapd 2
inetd 2
miniserv.pl 2
netcfgd 2
nscd 2
ospm-applet 2
ssh-agent 2
sshd 2
svc.startd 2
intrd 3
afpd 4
mdnsd 4
gnome-power-mana 5
clock-applet 7
sendmail 7
xscreensaver 7
fmd 9
fsflush 11
ntpd 11
updatemanagernot 13
isapython2.6 14
devfsadm 20
gnome-terminal 20
dtrace 23
mixer_applet2 25
smbd 39
nwam-manager 60
svc.configd 79
Xorg 100
sched 394078
dtrace -n 'sched:::off-cpu { @[execname] = count(); }'
dtrace: description 'sched:::off-cpu ' matched 3 probes
^C
automountd 1
ipmgmtd 1
idmapd 2
in.routed 2
init 2
miniserv.pl 2
netcfgd 2
ssh-agent 2
sshd 2
svc.startd 2
fmd 3
hald 3
inetd 3
intrd 3
hald-addon-acpi 4
nscd 4
gnome-power-mana 5
sendmail 5
mdnsd 6
devfsadm 8
xscreensaver 9
fsflush 10
ntpd 14
updatemanagernot 16
mixer_applet2 21
isapython2.6 22
dtrace 24
gnome-terminal 24
smbd 39
nwam-manager 58
zpool-rpool 65
svc.configd 79
Xorg 82
sched 369939
So, quite obviously there is one executable standing out here, "sched";
now what do these figures mean?
Regards,
Gernot Wolf
Post by Michael Stapleton
Hi Gernot,
You have a high context switch rate.
try
dtrace -n 'sched:::off-cpu { @[execname] = count(); }'
for a few seconds to see if you can get the name of an executable.
Mike
Michael Stapleton
2011-10-20 19:25:20 UTC
Permalink
Attachment is missing...

I'd like to see the whole thing, but in the meantime

#grep -i acpi /var/adm/messages

Anything?

Mike
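If the current messages file has already rotated, sweeping the older copies and the fault manager logs is cheap too; these are all standard, read-only illumos commands (shown as a sketch):

# grep -i acpi /var/adm/messages*
# fmadm faulty
# fmdump -eV | grep -i acpi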
Post by Gernot Wolf
You mean, besides being quite huge? I took a quick look at it, but other
than getting a headache by doing that, my limited unix skills
unfortunately fail me.
I've zipped it and attached it to this mail, maybe someone can get
anything out of it...
Regards,
Gernot
Gernot Wolf
2011-10-20 20:20:10 UTC
Permalink
Grep output attached. Hopefully this attachment will go through ;)

Regards,
Gernot Wolf
Post by Michael Stapleton
Attachment is missing...
I'd like to see the whole thing, but in the meantime
#grep -i acpi /var/adm/messages
Anything?
Mike
Michael Stapleton
2011-10-20 20:33:34 UTC
Permalink
Is this running in a VM?

Mike
Post by Gernot Wolf
Grep output attached. Hopefully this attachment will go through ;)
Regards,
Gernot Wolf
Gernot Wolf
2011-10-20 20:40:13 UTC
Permalink
No. Why?

Regards,
Gernot Wolf
Post by Michael Stapleton
Is this running in a VM?
Mike
Michael Stapleton
2011-10-20 20:49:29 UTC
Permalink
Just checking ;-)

Night!

Mike
Post by Gernot Wolf
No. Why?
Regards,
Gernot Wolf
Gernot Wolf
2011-10-20 18:57:26 UTC
Permalink
As far as I was able to understand the output of my initial diagnostic
commands, it's indeed the kernel that causes the cpu load.

I've attached the dtrace results, as they are rather lengthy. I've run
each of them for a couple of seconds. What got my attention at first
glance (as far as I'm able to understand anything of these figures) was
that "acpi" showed up a lot in the output of the second dtrace command.

As Steve already pointed out, the lockstat results also showed acpi
debug tracing functions, which isn't the case when I run lockstat on
the OI box at my office.

Can the power supply have anything to do with all of this? I had to
replace it a few days ago, because the old one had a meltdown...

Regards,
Gernot Wolf
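One cheap way to see how busy the ACPI code really is would be to count entries into its kernel functions for a few seconds with fbt (a sketch; this assumes the ACPI CA kernel module is named acpica, as it usually is on illumos-based systems):

# dtrace -n 'fbt:acpica::entry { @[probefunc] = count(); } tick-10s { trunc(@, 20); exit(0); }'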
Post by Michael Stapleton
That rules out userland.
Sched tells me that it is not a user process. If kernel code is
executing on a cpu, tools will report the sched process. The count was
how many times the process was taken off the CPU while dtrace was
running.
Rennie Allen
2011-10-20 17:27:17 UTC
Permalink
Dontchya just love dtrace?


On 10/20/11 10:22 AM, "Michael Stapleton"
Post by Michael Stapleton
Hi Gernot,
You have a high context switch rate.
try
dtrace -n 'sched:::off-cpu { @[execname] = count(); }'
for a few seconds to see if you can get the name of an executable.
Mike
Michael Stapleton
2011-10-20 17:47:13 UTC
Permalink
Sure do :-)


People tend to think only about ZFS and maybe Zones. They don't
understand dtrace and resource management.

Solaris is much more than ZFS, but you really have to know what you are
doing to appreciate it.

The true strength of Solaris is server side in the hands of
professionals.

There has been a bit of back and forth about Linux and OpenIndiana
lately; personally, I think we should focus on our strengths.



Mike
Post by Rennie Allen
Dontchya just love dtrace?
Steve Gonczi
2011-10-20 17:55:13 UTC
Permalink
Your lockstat output fingers Acpi debug tracing functions.
I wonder why these are running in the first place.


Steve

----- Original Message -----
Hello all,

I have a machine here at my home running OpenIndiana oi_151a, which
serves as a NAS on my home network. ....
Gernot Wolf
2011-10-20 20:35:47 UTC
Permalink
Yes, I noticed that too when I compared the lockstat output on my OI box
with that on the OI box at my office. There, no ACPI debug tracing
functions show up at all...

Mike made further suggestions concerning ACPI, but that will have to
wait for tomorrow. My bed is calling my name ;)

Regards,
Gernot Wolf
Post by Steve Gonczi
Your lockstat output fingers Acpi debug tracing functions.
I wonder why these are running in the first place.
Steve
----- Original Message -----
Hello all,
I have a machine here at my home running OpenIndiana oi_151a, which
serves as a NAS on my home network. ....
Rennie Allen
2011-10-20 18:02:24 UTC
Permalink
Sched is the scheduler itself. How long did you let this run? If only
for a couple of seconds, then that number is high, but not ridiculous for
a loaded system, so I think that this output rules out a high context
switch rate.

Try this command to see if some process is making an excessive number of
syscalls:

dtrace -n 'syscall:::entry { @[execname]=count()}'

If not, then I'd try looking at interrupts...
Post by Gernot Wolf
Yeah, I've been able to run these diagnostics on another OI box (at my
office, so much for OI not being used in production ;)), and noticed
that there were several values that were quite different. I just don't
have any idea what these figures mean...
Anyway, here are the results of the dtrace command (I executed the command twice):
dtrace: description 'sched:::off-cpu ' matched 3 probes
^C
ipmgmtd 1
gconfd-2 2
gnome-settings-d 2
idmapd 2
inetd 2
miniserv.pl 2
netcfgd 2
nscd 2
ospm-applet 2
ssh-agent 2
sshd 2
svc.startd 2
intrd 3
afpd 4
mdnsd 4
gnome-power-mana 5
clock-applet 7
sendmail 7
xscreensaver 7
fmd 9
fsflush 11
ntpd 11
updatemanagernot 13
isapython2.6 14
devfsadm 20
gnome-terminal 20
dtrace 23
mixer_applet2 25
smbd 39
nwam-manager 60
svc.configd 79
Xorg 100
sched 394078
dtrace: description 'sched:::off-cpu ' matched 3 probes
^C
automountd 1
ipmgmtd 1
idmapd 2
in.routed 2
init 2
miniserv.pl 2
netcfgd 2
ssh-agent 2
sshd 2
svc.startd 2
fmd 3
hald 3
inetd 3
intrd 3
hald-addon-acpi 4
nscd 4
gnome-power-mana 5
sendmail 5
mdnsd 6
devfsadm 8
xscreensaver 9
fsflush 10
ntpd 14
updatemanagernot 16
mixer_applet2 21
isapython2.6 22
dtrace 24
gnome-terminal 24
smbd 39
nwam-manager 58
zpool-rpool 65
svc.configd 79
Xorg 82
sched 369939
So, quite obviously there is one executable standing out here, "sched";
now what's the meaning of these figures?
Regards,
Gernot Wolf
Michael Stapleton
2011-10-20 18:23:16 UTC
Permalink
My understanding is that it is not supposed to be a loaded system. We
want to know what the load is.


***@tintenfass:~# intrstat 30

device | cpu0 %tim cpu1 %tim
-------------+------------------------------
e1000g#0 | 1 0,0 0 0,0
ehci#0 | 0 0,0 4 0,0
ehci#1 | 3 0,0 0 0,0
hci1394#0 | 0 0,0 2 0,0
i8042#1 | 0 0,0 4 0,0
i915#1 | 0 0,0 2 0,0
pci-ide#0 | 15 0,1 0 0,0
uhci#0 | 0 0,0 2 0,0
uhci#1 | 0 0,0 0 0,0
uhci#2 | 3 0,0 0 0,0
uhci#3 | 0 0,0 2 0,0
uhci#4 | 0 0,0 4 0,0

device | cpu0 %tim cpu1 %tim
-------------+------------------------------
e1000g#0 | 1 0,0 0 0,0
ehci#0 | 0 0,0 3 0,0
ehci#1 | 3 0,0 0 0,0
hci1394#0 | 0 0,0 1 0,0
i8042#1 | 0 0,0 6 0,0
i915#1 | 0 0,0 1 0,0
pci-ide#0 | 3 0,0 0 0,0
uhci#0 | 0 0,0 1 0,0
uhci#1 | 0 0,0 0 0,0
uhci#2 | 3 0,0 0 0,0
uhci#3 | 0 0,0 1 0,0
uhci#4 | 0 0,0 3 0,0

***@tintenfass:~# vmstat 5 10
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr cd s0 s1 s2   in   sy   cs us sy id
 0 0 0 4243840 1145720 1  6  0  0  0  0  2  0  1  1  1 9767  121 37073  0 54 46
 0 0 0 4157824 1059796 4 11  0  0  0  0  0  0  0  0  0 9752  119 37132  0 54 46
 0 0 0 4157736 1059752 0  0  0  0  0  0  0  0  0  0  0 9769  113 37194  0 54 46
 0 0 0 4157744 1059788 0  0  0  0  0  0  0  0  0  0  0 9682  104 36941  0 54 46
 0 0 0 4157744 1059788 0  0  0  0  0  0  0  0  0  0  0 9769  105 37208  0 54 46
 0 0 0 4157728 1059772 0  1  0  0  0  0  0  0  0  0  0 9741  159 37104  0 54 46
 0 0 0 4157728 1059772 0  0  0  0  0  0  0  0  0  0  0 9695  127 36931  0 54 46
 0 0 0 4157744 1059788 0  0  0  0  0  0  0  0  0  0  0 9762  105 37188  0 54 46
 0 0 0 4157744 1059788 0  0  0  0  0  0  0  0  0  0  0 9723  102 37058  0 54 46
 0 0 0 4157744 1059788 0  0  0  0  0  0  0  0  0  0  0 9774  105 37263  0 54 46
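
(The striking columns here are in and cs: roughly 9,700 interrupts and 37,000
context switches per second, while the faults sy column shows only 100-160
syscalls per second, with 0% user time and about 54% system time. Whatever is
eating the CPU lives in the kernel, not in a user process.)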

Mike
Post by Rennie Allen
Sched is the scheduler itself. How long did you let this run? If only
for a couple of seconds, then that number is high, but not ridiculous for
a loaded system, so I think that this output rules out a high context
switch rate.
Try this command to see if some process is making an excessive number of
syscalls. If not, then I'd try looking at interrupts...
Michael Schuster
2011-10-20 18:25:32 UTC
Permalink
Hi,

just found this:
http://download.oracle.com/docs/cd/E19253-01/820-5245/ghgoc/index.html

does it help?

On Thu, Oct 20, 2011 at 20:23, Michael Stapleton
Post by Michael Stapleton
My understanding is that it is not supposed to be a loaded system. We
want to know what the load is.
--
Michael Schuster
http://recursiveramblings.wordpress.com/
Michael Stapleton
2011-10-20 18:33:07 UTC
Permalink
Don't know. I don't like to troubleshoot by guessing if possible. I'd rather
follow the evidence to capture the culprit. Use what we know to discover
what we do not know.

We know the CS rate in vmstat is high, we know Sys time is high, we know
the syscall rate is low, and we know it is not a user process, therefore it is
kernel code. Likely a driver.

So what kernel code is running the most?

What's causing that code to run?

Does that code belong to a driver?
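
(One quick way to get at the first question, a standard profile-provider
idiom rather than one of the commands quoted in this thread; lockstat -kIW,
used elsewhere here, answers the same question:

dtrace -n 'profile-997 /arg0/ { @[stack()] = count(); } tick-30s { trunc(@, 10); exit(0); }'

arg0 is the sampled kernel PC, so the predicate skips samples taken in user
mode; after 30 seconds the ten most common kernel stacks are printed, which
usually points straight at the driver or subsystem burning the time.)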


Mike
Post by Michael Schuster
Hi,
http://download.oracle.com/docs/cd/E19253-01/820-5245/ghgoc/index.html
does it help?
On Thu, Oct 20, 2011 at 20:23, Michael Stapleton
Post by Michael Stapleton
My understanding is that it is not supposed to be a loaded system. We
want to know what the load is.
Michael Schuster
2011-10-20 18:37:05 UTC
Permalink
On Thu, Oct 20, 2011 at 20:33, Michael Stapleton
Post by Michael Stapleton
Don't know. I don't like to trouble shoot by guess if possible. I rather
follow the evidence to capture the culprit. Use what we know to discover
what we do not know.
if you're answering my question: I'm not guessing that much. I looked
at the lockstat output, and right there at the top we see i86_mwait
consuming 45%(!) ... so I popped that into Google; the link I quoted is
the first result, and the description matches well enough that I'd
give it a try.
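
(For reference, the kernel-profiling invocation behind these numbers shows up
later in the thread when Gernot runs it on his office box:

lockstat -kIW -D 20 sleep 30

i.e. a 30-second profile limited to the top 20 callers; presumably the same
command produced the output attached to the first mail.)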

Since Gernot is seeing the issue, maybe he wants to pitch in here?

regards
Michael
Post by Michael Stapleton
Post by Michael Schuster
Hi,
http://download.oracle.com/docs/cd/E19253-01/820-5245/ghgoc/index.html
does it help?
--
Michael Schuster
http://recursiveramblings.wordpress.com/
Michael Stapleton
2011-10-20 18:55:23 UTC
Permalink
You might be right.

But 45% of what?

Profiling interrupt: 5844 events in 30.123 seconds (194 events/sec)

Count indv cuml rcnt     nsec Hottest CPU+PIL        Caller
-------------------------------------------------------------------------------
 2649  45%  45% 0.00     1070 cpu[1]                 i86_mwait
  358   6%  51% 0.00      963 cpu[0]                 AcpiDebugPrint
  333   6%  57% 0.00      960 cpu[0]                 AcpiUtTrackStackPtr

2649 times in 30 seconds totaling 1070 ns does not seem like much to me.

My idle laptop shows:

Count indv cuml rcnt     nsec Hottest CPU+PIL        Caller
-------------------------------------------------------------------------------
 5441  93%  93% 0.00     3132 cpu[0]                 i86_mwait
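
(For scale: 2649 of 5844 profiling samples is about 45% of CPU time spent in
the idle loop, against 5441 of roughly 5850 samples, about 93%, on the idle
laptop. 45% idle on a two-CPU box lines up with the roughly 46% id / 54% sy
that vmstat reports for Gernot's machine, so about half the CPU really is
busy somewhere in the kernel.)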


Mike
Post by Michael Schuster
On Thu, Oct 20, 2011 at 20:33, Michael Stapleton
Post by Michael Stapleton
Don't know. I don't like to trouble shoot by guess if possible. I rather
follow the evidence to capture the culprit. Use what we know to discover
what we do not know.
if you're answering my question: I'm not guessing that much: I looked
at lockstat output, and right there at the top we see i86_mwait
consuming 45%(!) ... so, popped that into google, the link I quote is
the first to appear, and the description matches well enough that I'd
give it a try.
Since Gernot is seeing the issue, maybe he wants to pitch in here?
regards
Michael
Post by Michael Stapleton
Post by Michael Schuster
Hi,
http://download.oracle.com/docs/cd/E19253-01/820-5245/ghgoc/index.html
does it help?
Michael Schuster
2011-10-20 18:57:05 UTC
Permalink
On Thu, Oct 20, 2011 at 20:55, Michael Stapleton
Post by Michael Stapleton
You might be right.
But 45% of what?
Profiling interrupt: 5844 events in 30.123 seconds (194 events/sec)
Count indv cuml rcnt     nsec Hottest CPU+PIL
Caller
-------------------------------------------------------------------------------
 2649 45%  45% 0.00     1070 cpu[1]
i86_mwait
 358   6%  51% 0.00      963 cpu[0]
AcpiDebugPrint
 333   6%  57% 0.00      960 cpu[0]
AcpiUtTrackStackPtr
2649 times in 30 seconds totaling 1070 ns does not seem like much to me.
Count indv cuml rcnt     nsec Hottest CPU+PIL
Caller
-------------------------------------------------------------------------------
 5441 93%  93% 0.00     3132 cpu[0]
i86_mwait
hmm ... good point.

Gernot? ;-)
--
Michael Schuster
http://recursiveramblings.wordpress.com/
Gernot Wolf
2011-10-20 20:01:03 UTC
Permalink
Post by Michael Schuster
On Thu, Oct 20, 2011 at 20:55, Michael Stapleton
Post by Michael Stapleton
You might be right.
But 45% of what?
Profiling interrupt: 5844 events in 30.123 seconds (194 events/sec)
Count indv cuml rcnt nsec Hottest CPU+PIL
Caller
-------------------------------------------------------------------------------
2649 45% 45% 0.00 1070 cpu[1]
i86_mwait
358 6% 51% 0.00 963 cpu[0]
AcpiDebugPrint
333 6% 57% 0.00 960 cpu[0]
AcpiUtTrackStackPtr
2649 times in 30 seconds totaling 1070 ns does not seem like much to me.
Count indv cuml rcnt nsec Hottest CPU+PIL
Caller
-------------------------------------------------------------------------------
5441 93% 93% 0.00 3132 cpu[0]
i86_mwait
hmm ... good point.
Gernot? ;-)
...slowly catching up... ;)

I ran the lockstat command on the OI box at my office, just to have
another set of numbers to compare. Output:

***@victor:~# lockstat -kIW -D 20 sleep 30

Profiling interrupt: 2930 events in 30.208 seconds (97 events/sec)

Count indv cuml rcnt nsec Hottest CPU+PIL Caller
-------------------------------------------------------------------------------
2882 98% 98% 0.00 2035 cpu[0] mach_cpu_idle
10 0% 99% 0.00 8696 cpu[0] (usermode)
5 0% 99% 0.00 4738 cpu[0] mutex_enter
5 0% 99% 0.00 9502 cpu[0] lzjb_compress
1 0% 99% 0.00 2647 cpu[0] vsd_free
1 0% 99% 0.00 7323 cpu[0] vn_rele_dnlc
1 0% 99% 0.00 11953 cpu[0] vmem_xfree
1 0% 99% 0.00 16401 cpu[0] kmem_partial_slab_cmp
1 0% 99% 0.00 13537 cpu[0] list_remove
1 0% 99% 0.00 13100 cpu[0] copy_pattern
1 0% 99% 0.00 2154 cpu[0]+11 disp_lock_enter_high
1 0% 99% 0.00 2098 cpu[0] avl_destroy_nodes
1 0% 99% 0.00 9981 cpu[0] avl_find
1 0% 99% 0.00 12003 cpu[0] avl_rotation
1 0% 99% 0.00 1933 cpu[0]+11 pg_ev_thread_swtch
1 0% 99% 0.00 12921 cpu[0] rw_enter
1 0% 99% 0.00 15171 cpu[0] rw_destroy
1 0% 100% 0.00 8611 cpu[0] page_lookup_create
1 0% 100% 0.00 12249 cpu[0] mutex_exit
1 0% 100% 0.00 11927 cpu[0] mutex_tryenter
-------------------------------------------------------------------------------

So it seems to be normal to have some kind of idle process on top.
Numbers seem to be roughly comparable...? However, compared to the
lockstat output of my box, there isn't anything with ACPI here...

Regards,
Gernot Wolf
Rennie Allen
2011-10-20 19:07:50 UTC
Permalink
Profiling is AFAIK statistical, so it might not show the correct number.

Certainly the count of interrupts does not appear high, but if the handler
is spending a long time in the interrupt...

The script I sent measures the time spent in the handler (intrstat might do
this as well, but I just don't know how intrstat works).

On Thu, Oct 20, 2011 at 11:55 AM, Michael Stapleton <
Post by Michael Stapleton
You might be right.
But 45% of what?
Profiling interrupt: 5844 events in 30.123 seconds (194 events/sec)
Count indv cuml rcnt nsec Hottest CPU+PIL
Caller
-------------------------------------------------------------------------------
2649 45% 45% 0.00 1070 cpu[1]
i86_mwait
358 6% 51% 0.00 963 cpu[0]
AcpiDebugPrint
333 6% 57% 0.00 960 cpu[0]
AcpiUtTrackStackPtr
2649 times in 30 seconds totaling 1070 ns does not seem like much to me.
Count indv cuml rcnt nsec Hottest CPU+PIL
Caller
-------------------------------------------------------------------------------
5441 93% 93% 0.00 3132 cpu[0]
i86_mwait
Mike
--
"I hope some animal never bores a hole in my head and lays its eggs in my
brain, because later you might think you're having a good idea but it's just
eggs hatching" - Jack Handy
Gernot Wolf
2011-10-20 19:44:15 UTC
Permalink
Post by Michael Schuster
Since Gernot is seeing the issue, maybe he wants to pitch in here?
He wants, he's just having a hard time keeping up with you guys. You're
so fast, I'm hopelessly lagging behind ;)

Thanks a lot for all the help so far to all of you!

Regards,
Gernot
Gernot Wolf
2011-10-20 19:40:53 UTC
Permalink
Nope. Cpu load remains the same. top shows:

CPU states: 47.5% idle, 0.0% user, 52.5% kernel, 0.0% iowait, 0.0% swap

Regards,
Gernot Wolf
Post by Michael Schuster
Hi,
http://download.oracle.com/docs/cd/E19253-01/820-5245/ghgoc/index.html
does it help?
Gernot Wolf
2011-10-20 19:29:05 UTC
Permalink
I let it run (as all the other dtrace commands you guys have given me)
just for a couple of seconds. And no, it's not a loaded system, that's
the problem here. It's just a home NAS...

Here is the dtrace output:

***@tintenfass:/root# dtrace -n 'syscall:::entry { @[execname]=count()}'
dtrace: description 'syscall:::entry ' matched 234 probes
^C

idmapd 1
inetd 1
ipmgmtd 1
netcfgd 1
svc.startd 1
fmd 2
utmpd 2
cnid_dbd 3
gconfd-2 3
miniserv.pl 3
ssh-agent 4
mdnsd 9
devfsadm 12
smbd 13
gnome-power-mana 15
sshd 16
nscd 20
sendmail 20
intrd 22
isapython2.6 24
updatemanagernot 24
mixer_applet2 48
ntpd 60
svc.configd 75
nwam-manager 148
Xorg 602
dtrace 3417
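
(dtrace itself accounts for nearly all of these calls; nothing else makes
more than a few hundred syscalls over the sample, which matches the low sy
column in the vmstat output and backs up the point that the load is not
coming from a syscall-heavy userland process.)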

Regards,
Gernot
Post by Rennie Allen
Sched is the scheduler itself. How long did you let this run? If only
for a couple of seconds, then that number is high, but not ridiculous for
a loaded system, so I think that this output rules out a high context
switch rate.
Try this command to see if some process is making an excessive number of
syscalls. If not, then I'd try looking at interrupts...
Rennie Allen
2011-10-20 18:22:31 UTC
Permalink
Try the following script, which will identify any drivers with high
interrupt load

---------------------
#!/usr/sbin/dtrace -s

sdt:::interrupt-start { self->ts = vtimestamp; }

sdt:::interrupt-complete
/self->ts && arg0 != 0/
{
        this->devi = (struct dev_info *)arg0;
        self->name = this->devi != 0 ?
            stringof(`devnamesp[this->devi->devi_major].dn_name) : "?";
        this->inst = this->devi != 0 ? this->devi->devi_instance : 0;
        @num[self->name, this->inst] = sum(vtimestamp - self->ts);
        self->name = 0;
}

sdt:::interrupt-complete { self->ts = 0; }

dtrace:::END
{
        printf("%11s %16s\n", "DEVICE", "TIME (ns)");
        printa("%10s%-3d %@16d\n", @num);
}
---------------------
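
(To run it, assuming the script is saved as, say, intrtime.d: make it
executable and run it as root, let it sit for a while, then press Ctrl-C;
the END probe fires and prints the per-driver totals.

# chmod +x intrtime.d
# ./intrtime.d
^C

The figures are cumulative nanoseconds spent in each driver's interrupt
handler while the script was running.)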





On 10/20/11 11:07 AM, "Michael Stapleton"
Post by Michael Stapleton
That rules out userland.
Sched tells me that it is not a user process. If kernel code is
executing on a cpu, tools will report the sched process. The count was
how many times the process was taken off the CPU while dtrace was
running.
Gernot Wolf
2011-10-20 19:18:58 UTC
Permalink
Here are the results (let the script run for a few secs):

CPU ID FUNCTION:NAME
1 2 :END DEVICE TIME (ns)
i9151 22111
heci0 23119
pci-ide0 38700
uhci1 47277
hci13940 50554
uhci3 63145
uhci0 64232
uhci4 103429
ehci1 107272
ehci0 108445
uhci2 112589
e1000g0 160024

Regards,
Gernot Wolf
Post by Rennie Allen
Try the following script, which will identify any drivers with high
interrupt load
---------------------
#!/usr/sbin/dtrace -s
sdt:::interrupt-start { self->ts = vtimestamp; }
sdt:::interrupt-complete
/self->ts && arg0 != 0/
{
this->devi = (struct dev_info *)arg0;
self->name = this->devi != 0 ?
stringof(`devnamesp[this->devi->devi_major].dn_name) : "?";
this->inst = this->devi != 0 ? this->devi->devi_instance : 0;
@num[self->name, this->inst] = sum(vtimestamp - self->ts);
self->name = 0;
}
sdt:::interrupt-complete { self->ts = 0; }
dtrace:::END
{
printf("%11s %16s\n", "DEVICE", "TIME (ns)");
printa("%10s%-3d %@16d\n", @num);
}
---------------------
Rennie Allen
2011-10-20 21:45:25 UTC
Permalink
Sorry, I was away from my desk for a while. Obviously this isn't an issue;
in fact, if anything, those numbers are surprisingly small.
--
"I hope some animal never bores a hole in my head and lays its eggs in my
brain, because later you might think you're having a good idea but it's just
eggs hatching" - Jack Handy
Rennie Allen
2011-10-20 18:47:44 UTC
Permalink
I'd like to see a run of the script I sent earlier. I don't trust
intrstat (not for any particular reason, other than that I have never used
it)...


On 10/20/11 11:33 AM, "Michael Stapleton"
Post by Michael Stapleton
Don't know. I don't like to troubleshoot by guessing if possible. I'd rather
follow the evidence to capture the culprit. Use what we know to discover
what we do not know.
We know the CS rate in vmstat is high, we know Sys time is high, we know
the syscall rate is low, and we know it is not a user process, therefore it is
kernel. Likely a driver.
So what kernel code is running the most?
What's causing that code to run?
Does that code belong to a driver?
Mike
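
One standard way to see what kernel code is running the most (this exact command is not quoted in the thread, it is just the usual DTrace profiling idiom) is to sample the kernel program counter for a few seconds:

---------------------
# dtrace -n 'profile-997 /arg0/ { @[func(arg0)] = count(); }'
---------------------

arg0 is the kernel PC at each profile tick, so the aggregation shows which kernel functions the CPUs are actually spending their time in.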
Post by Michael Schuster
Hi,
http://download.oracle.com/docs/cd/E19253-01/820-5245/ghgoc/index.html
does it help?
On Thu, Oct 20, 2011 at 20:23, Michael Stapleton
Post by Michael Stapleton
My understanding is that it is not supposed to be a loaded system. We
want to know what the load is.
device | cpu0 %tim cpu1 %tim
-------------+------------------------------
e1000g#0 | 1 0,0 0 0,0
ehci#0 | 0 0,0 4 0,0
ehci#1 | 3 0,0 0 0,0
hci1394#0 | 0 0,0 2 0,0
i8042#1 | 0 0,0 4 0,0
i915#1 | 0 0,0 2 0,0
pci-ide#0 | 15 0,1 0 0,0
uhci#0 | 0 0,0 2 0,0
uhci#1 | 0 0,0 0 0,0
uhci#2 | 3 0,0 0 0,0
uhci#3 | 0 0,0 2 0,0
uhci#4 | 0 0,0 4 0,0
device | cpu0 %tim cpu1 %tim
-------------+------------------------------
e1000g#0 | 1 0,0 0 0,0
ehci#0 | 0 0,0 3 0,0
ehci#1 | 3 0,0 0 0,0
hci1394#0 | 0 0,0 1 0,0
i8042#1 | 0 0,0 6 0,0
i915#1 | 0 0,0 1 0,0
pci-ide#0 | 3 0,0 0 0,0
uhci#0 | 0 0,0 1 0,0
uhci#1 | 0 0,0 0 0,0
uhci#2 | 3 0,0 0 0,0
uhci#3 | 0 0,0 1 0,0
uhci#4 | 0 0,0 3 0,0
 kthr      memory            page            disk          faults      cpu
 r b w   swap    free   re  mf pi po fr de sr cd s0 s1 s2   in   sy    cs us sy id
 0 0 0 4243840 1145720   1   6  0  0  0  0  2  0  1  1  1 9767  121 37073  0 54 46
 0 0 0 4157824 1059796   4  11  0  0  0  0  0  0  0  0  0 9752  119 37132  0 54 46
 0 0 0 4157736 1059752   0   0  0  0  0  0  0  0  0  0  0 9769  113 37194  0 54 46
 0 0 0 4157744 1059788   0   0  0  0  0  0  0  0  0  0  0 9682  104 36941  0 54 46
 0 0 0 4157744 1059788   0   0  0  0  0  0  0  0  0  0  0 9769  105 37208  0 54 46
 0 0 0 4157728 1059772   0   1  0  0  0  0  0  0  0  0  0 9741  159 37104  0 54 46
 0 0 0 4157728 1059772   0   0  0  0  0  0  0  0  0  0  0 9695  127 36931  0 54 46
 0 0 0 4157744 1059788   0   0  0  0  0  0  0  0  0  0  0 9762  105 37188  0 54 46
 0 0 0 4157744 1059788   0   0  0  0  0  0  0  0  0  0  0 9723  102 37058  0 54 46
 0 0 0 4157744 1059788   0   0  0  0  0  0  0  0  0  0  0 9774  105 37263  0 54 46
Mike
Post by Rennie Allen
Sched is the scheduler itself. How long did you let this run? If only
for a couple of seconds, then that number is high, but not ridiculous for
a loaded system, so I think that this output rules out a high context
switch rate.
Try this command to see if some process is making an excessive number of
If not, then I'd try looking at interrupts...
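
The command itself was stripped by the archive; from the wording it was presumably a syscall counter, along the lines of (a guess, not quoted from the thread):

---------------------
# dtrace -n 'syscall:::entry{@[execname]=count()}'
---------------------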
Michael Stapleton
2011-10-20 19:00:20 UTC
Permalink
+1

Mike
Gernot Wolf
2011-10-20 19:48:23 UTC
Permalink
Results are up, see other post...

Regards,
Gernot Wolf
Steve Gonczi
2011-10-20 19:07:54 UTC
Permalink
i86_mwait is the idle function the CPU executes when it has nothing else
to do; basically, it sleeps inside that function.

Lockstat-based profiling just samples whatever is on-CPU, so idle time
shows up as some form of mwait, depending on how the BIOS is configured.
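
(A minimal sketch of the kind of run that produces such a listing; the
30-second window and the top-20 cutoff are arbitrary choices, not values
from Gernot's session:

    # kernel profiling: sample what is on-CPU and report the 20 hottest functions
    lockstat -kIW -D 20 sleep 30

In a report like that, a large share attributed to i86_mwait just means the
CPU was parked in its idle loop while the samples were taken; it is not by
itself evidence that real work is being done.)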

Steve


----- Original Message -----
On Thu, Oct 20, 2011 at 20:33, Michael Stapleton

If you're answering my question: I'm not guessing that much. I looked
at lockstat output, and right there at the top we see i86_mwait
consuming 45%(!).
Steve Gonczi
2011-10-20 19:21:09 UTC
Permalink
Here is something to check:

Pop into the debugger (mdb -k) and see what AcpiDbgLevel's current setting is.

E.g.:

AcpiDbgLevel/x

The default setting is 3. If it's something higher, that would explain the high
incidence of ACPI trace/debug calls.

To exit the debugger type $q or ::quit
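
(And should it come back higher than the default, it can be dropped back
down in the same session. A sketch only; /W writes a 4-byte value, and the
3 shown is just illustrative:

    > AcpiDbgLevel/x
    AcpiDbgLevel:   3
    > AcpiDbgLevel/W 0
    > $q

A write made this way only lasts until the next reboot, so it's a diagnostic
aid rather than a permanent fix.)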


Steve



----- Original Message -----
You mean, besides being quite huge? I took a quick look at it, but other
than getting a headache by doing that, my limited unix skills
unfortunately fail me.

I've zipped it and attached it to this mail, maybe someone can get
anything out of it...

Regards,
Gernot


Gernot Wolf
2011-10-20 20:10:58 UTC
Permalink
Ok, here we go:

***@tintenfass:~# mdb -k
Loading modules: [ unix genunix specfs dtrace mac cpu.generic uppc
pcplusmp scsi_vhci zfs ip hook neti sockfs arp usba uhci s1394 fctl
stmf_sbd stmf idm fcip cpc random sata crypto sd lofs logindmux ptm ufs
sppp smbsrv nfs ipc ]
> AcpiDbgLevel/x
AcpiDbgLevel:
AcpiDbgLevel:   0
> q
mdb: failed to dereference symbol: unknown symbol name
> $q
So, AcpiDbgLevel seems to be 0??? Now I'm getting confused... shouldn't
that mean no ACPI debug function calls at all?

Regards,
Gernot Wolf
Michael Stapleton
2011-10-20 20:20:19 UTC
Permalink
I would not worry about it. The messages are being caused by some
problem. Let's focus on getting the messages.
Debug will increase your load, but not like you are seeing.

Mike