Discussion:
[pve-devel] [PATCH qemu-server] Fix #1242 : clone_disk : call qga fstrim after clone
Alexandre Derumier
2018-05-28 15:36:50 UTC
Some storages like rbd or lvm can't keep thin-provisioning after a qemu-mirror.

Call qga guest-fstrim if qga is available
---
PVE/API2/Qemu.pm | 8 ++++++++
PVE/QemuMigrate.pm | 5 +++++
2 files changed, 13 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 8d4b10d..86fac9d 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -2741,6 +2741,10 @@ __PACKAGE__->register_method({

PVE::QemuConfig->write_config($newid, $newconf);

+ if ($running && $conf->{agent} && PVE::QemuServer::qga_check_running($vmid)) {
+ eval { PVE::QemuServer::vm_mon_cmd($vmid, "guest-fstrim"); };
+ }
+
if ($target) {
# always deactivate volumes - avoid lvm LVs to be active on several nodes
PVE::Storage::deactivate_volumes($storecfg, $vollist, $snapname) if !$running;
@@ -2918,6 +2922,10 @@ __PACKAGE__->register_method({

PVE::QemuConfig->write_config($vmid, $conf);

+ if ($running && $conf->{agent} && PVE::QemuServer::qga_check_running($vmid)) {
+ eval { PVE::QemuServer::vm_mon_cmd($vmid, "guest-fstrim"); };
+ }
+
eval {
# try to deactivate volumes - avoid lvm LVs to be active on several nodes
PVE::Storage::deactivate_volumes($storecfg, [ $newdrive->{file} ])
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 27cf7e3..ab2258d 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -966,6 +966,11 @@ sub phase3_cleanup {
$self->{errors} = 1;
}
}
+
+ if ($self->{storage_migration} && $conf->{qga} && $self->{running}) {
+ my $cmd = [@{$self->{rem_ssh}}, 'qm', 'agent','fstrim'];
+ eval{ PVE::Tools::run_command($cmd, outfunc => sub {}, errfunc => sub {}) };
+ }
}

# close tunnel on successful migration, on error phase2_cleanup closed it
--
2.11.0
Dietmar Maurer
2018-05-28 17:29:36 UTC
And we really want to ignore error messages?
Post by Alexandre Derumier
+ if ($running && $conf->{agent} &&
PVE::QemuServer::qga_check_running($vmid)) {
+ eval { PVE::QemuServer::vm_mon_cmd($vmid, "guest-fstrim"); };
+ }
+
Alexandre DERUMIER
2018-05-28 18:50:30 UTC
Post by Alexandre Derumier
Post by Dietmar Maurer
And we really want to ignore error messages?
I think yes, it's best effort. This doesn't break migration/mirroring.
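A minimal variant that would at least surface failures is to keep the eval but warn on error, e.g. (just a sketch around the same call as in the patch):

eval { PVE::QemuServer::vm_mon_cmd($vmid, "guest-fstrim"); };
warn "guest-fstrim failed: $@\n" if $@;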

Alwin Antreich
2018-05-28 17:50:24 UTC
Post by Alexandre Derumier
Some storages like rbd or lvm can't keep thin-provisioning after a qemu-mirror.
Call qga guest-fstrim if qga is available
---
PVE/API2/Qemu.pm | 8 ++++++++
PVE/QemuMigrate.pm | 5 +++++
2 files changed, 13 insertions(+)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 8d4b10d..86fac9d 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -2741,6 +2741,10 @@ __PACKAGE__->register_method({
PVE::QemuConfig->write_config($newid, $newconf);
+ if ($running && $conf->{agent} && PVE::QemuServer::qga_check_running($vmid)) {
+ eval { PVE::QemuServer::vm_mon_cmd($vmid, "guest-fstrim"); };
+ }
+
if ($target) {
# always deactivate volumes - avoid lvm LVs to be active on several nodes
PVE::Storage::deactivate_volumes($storecfg, $vollist, $snapname) if !$running;
@@ -2918,6 +2922,10 @@ __PACKAGE__->register_method({
PVE::QemuConfig->write_config($vmid, $conf);
+ if ($running && $conf->{agent} && PVE::QemuServer::qga_check_running($vmid)) {
+ eval { PVE::QemuServer::vm_mon_cmd($vmid, "guest-fstrim"); };
+ }
+
eval {
# try to deactivate volumes - avoid lvm LVs to be active on several nodes
PVE::Storage::deactivate_volumes($storecfg, [ $newdrive->{file} ])
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 27cf7e3..ab2258d 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -966,6 +966,11 @@ sub phase3_cleanup {
$self->{errors} = 1;
}
}
+
+ if ($self->{storage_migration} && $conf->{qga} && $self->{running}) {
+ eval{ PVE::Tools::run_command($cmd, outfunc => sub {}, errfunc => sub {}) };
+ }
}
# close tunnel on successful migration, on error phase2_cleanup closed it
--
2.11.0
I have some thoughts on your patch.

If I understood it right, the fstrim is called on every migration with a
running guest agent. I guess the command is also called if you don't
have discard activated in the vm config, and then it might only produce an error
message.

Some users also like some of their VMs to be thick provisioned.

With multiple simultaneous migrations, though, this would extend/multiply the
IO load on the target system, as the fstrim starts while other VMs are
still being migrated. I think that might make users unhappy, especially since the
behaviour would change with your patch.

IMHO, it might be good to have a config option that sets whether a VM should do an
fstrim (eg. qga-fstrim: 0/1) on migration. This way users are actively setting
it and know that this also has its drawbacks on their systems.
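Roughly, such a guard could look like this (with 'qga-fstrim' as a purely hypothetical option name, reusing the calls from the patch):

if ($running && $conf->{'qga-fstrim'} && PVE::QemuServer::qga_check_running($vmid)) {
    eval { PVE::QemuServer::vm_mon_cmd($vmid, "guest-fstrim"); };
}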

Please correct me if I'm wrong.
My two cents. ;)
--
Cheers,
Alwin
Alexandre DERUMIER
2018-05-28 18:49:29 UTC
Post by Alexandre Derumier
Post by Alwin Antreich
If I understood it right, the fstrim is called on every migration with a
running guest agent. I guess the command is also called if you don't
have discard activated in the vm config, and then it might only produce an error
message.
It doesn't produce an error, but indeed, it does nothing in this case.
I can add a check for the discard option, to avoid the extra qga call.
Post by Alexandre Derumier
Post by Alwin Antreich
IMHO, it might be good to have a config option that sets whether a VM should do an
fstrim (eg. qga-fstrim: 0/1) on migration. This way users are actively setting
it and know that this also has its drawbacks on their systems.
maybe we can add it in datacenter.cfg? or as a storage.cfg option?


Alwin Antreich
2018-05-29 07:19:39 UTC
Post by Alexandre DERUMIER
Post by Alwin Antreich
If I understood it right, the fstrim is called on every migration with a
running guest agent. I guess the command is also called if you don't
have discard activated in the vm config, and then it might only produce an error
message.
It doesn't produce an error, but indeed, it does nothing in this case.
I can add a check for the discard option, to avoid the extra qga call.
If there is a config option (see below) then no extra care would be
needed here.
Post by Alexandre DERUMIER
Post by Alwin Antreich
IMHO, it might be good to have a config option that sets whether a VM should do an
fstrim (eg. qga-fstrim: 0/1) on migration. This way users are actively setting
it and know that this also has its drawbacks on their systems.
maybe we can add it in datacenter.cfg? or as a storage.cfg option?
I would think, the best is to have it in the vmid.conf itself, as maybe there
is one VM where I want to have flat images (eg. DB server) and not for the
rest.

Maybe in a fashion like, qga: fstrim=1,guest-exec=..., I guess this
makes extending the guest-agent commands more straight forward too.
Alexandre DERUMIER
2018-05-29 08:02:45 UTC
Post by Alexandre DERUMIER
Post by Alwin Antreich
I would think, the best is to have it in the vmid.conf itself, as maybe there
is one VM where I want to have flat images (eg. DB server) and not for the
rest.
But I think that if a user wants flat images, they won't use the discard option for the disk?
(so fstrim doesn't do anything)


I think we need an explicit option like "do fstrim or not after clone disk", more for the case where a user doesn't want the fstrim overhead
on the storage.
Post by Alexandre DERUMIER
Post by Alwin Antreich
Maybe in a fashion like, qga: fstrim=1,guest-exec=..., I guess this
makes extending the guest-agent commands more straight forward too.
maybe something like qga: clonedisk-fstrim=1, ... ?

(fstrim=1 sounds like we always allow fstrim or not)


Alwin Antreich
2018-05-29 12:03:09 UTC
Post by Alexandre DERUMIER
Post by Alwin Antreich
I would think, the best is to have it in the vmid.conf itself, as maybe there
is one VM where I want to have flat images (eg. DB server) and not for the
rest.
But I think that if a user wants flat images, they won't use the discard option for the disk?
(so fstrim doesn't do anything)
True.
Post by Alexandre DERUMIER
I think we need an explicit option like "do fstrim or not after clone disk", more for the case where a user doesn't want the fstrim overhead
on the storage.
With the above in mind, storage.cfg sounds to me like the better place.

On the other hand, this is a work-around for the underlying problem
of qemu mirror not being able to detect zeros in the data. Might it not be
easier to throw in an API call afterwards that does the fstrim? It would be
less to think about and maintain, until qemu can do it out-of-the-box.

# pvesh create /nodes/{node}/qemu/{vmid}/agent/fstrim
or by hand
# qm agent <vmid> fstrim

I assume there are only two use cases with fstrim:

*) many machines, clones and migrations
In this case, there is for sure some automation going on and the
user would call the API after each action taken. This way the fstrim
can be run in a controlled way (eg. monitoring the IO load).

*) few machines, seldom cloning or migration
Here the user would do a manual run of the command, or have a small
script inside/outside the VM that runs the fstrim.
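For the first case, a minimal sketch of such an outside-the-VM helper (a hypothetical script, not part of any patch), built around the 'qm agent <vmid> fstrim' call shown above:

#!/usr/bin/perl
use strict;
use warnings;
use PVE::Tools;

# trim the given VMs one after another, so the IO load on the storage
# stays controlled instead of hitting it all at once
for my $vmid (@ARGV) {
    my $cmd = ['qm', 'agent', $vmid, 'fstrim'];
    eval { PVE::Tools::run_command($cmd, outfunc => sub {}, errfunc => sub {}) };
    warn "fstrim for VM $vmid failed: $@" if $@;
}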
Alexandre DERUMIER
2018-05-29 13:35:06 UTC
Post by Alexandre DERUMIER
Post by Alwin Antreich
On the other hand, this is a work-around for the underlying problem
of qemu mirror not being able to detect zeros in the data. Might it not be
easier to throw in an API call afterwards that does the fstrim? It would be
less to think about and maintain, until qemu can do it out-of-the-box.
# pvesh create /nodes/{node}/qemu/{vmid}/agent/fstrim
or by hand
# qm agent <vmid> fstrim
*) many machines, clones and migrations
In this case, there is for sure some automation going on and the
user would call the API after each action taken. This way the fstrim
can be run in a controlled way (eg. monitoring the IO load).
Sometimes I'm doing a lot of move-disk operations in parallel, waiting some minutes/hours for them to finish.
For this case, it's a lot simpler to have the fstrim done automatically just after the move.
(could be an option in the move disk api?)
Post by Alexandre DERUMIER
Post by Alwin Antreich
*) few machines, seldom cloning or migration
Here the user would do a manual run of the command, or have a small
script inside/outside the VM that runs the fstrim.
For this case, I think a simple "fstrim" button in the gui could be enough.








Thomas Lamprecht
2018-05-30 05:39:00 UTC
Post by Alexandre Derumier
Some storages like rbd or lvm can't keep thin-provisioning after a qemu-mirror.
Call qga guest-fstrim if qga is available
Shouldn't the VM internal fstrim cronjob/timer (or windows equivalent)
do this anyway sooner or later?

I mean, although I understand your use case, this can really be a big and
long operation, not sure if we want to do this unconditionally for qga
enabled VMs.

If we actually do this I rather would go the way proposed in this thread
and convert qga to a format_string, then we could convert the QGA UI
dialog to a small component with checkbox(es):

qga: 1,fstrim_cloned_disks=1

(it could be argued what the default should be)
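A rough sketch of how such a property string could be handled on the backend, assuming a trimmed-down format hash with 'enabled' as the default key (the real format may differ):

use PVE::JSONSchema;

my $agent_fmt = {
    enabled => { type => 'boolean', default => 0, default_key => 1 },
    fstrim_cloned_disks => { type => 'boolean', optional => 1, default => 0 },
};

# '1,fstrim_cloned_disks=1' should then parse to roughly
# { enabled => 1, fstrim_cloned_disks => 1 }
my $parsed = PVE::JSONSchema::parse_property_string($agent_fmt, '1,fstrim_cloned_disks=1');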

Maybe we get, or even have already, some more operations regarding the
QGA where it makes sense to have a similar approach.

cheers,
Thomas
Alexandre DERUMIER
2018-05-30 05:48:16 UTC
Post by Alexandre Derumier
Post by Thomas Lamprecht
Shouldn't the VM internal fstrim cronjob/timer (or windows equivalent)
do this anyway sooner or later?
Yes sure, I'm already doing it (around 1 fstrim per week, randomized across all my vms).


But sometimes I can't wait (mainly for big volumes).
Post by Alexandre Derumier
Post by Thomas Lamprecht
I mean, although I understand your use case, this can really be a big and
long operation.
Yes, it depends on the storage and the free io/s available (personally, I don't care, I only have ssd storage).
Post by Alexandre Derumier
Post by Thomas Lamprecht
not sure if we want to do this unconditionally for qga enabled VMs.
Yes, it's better if the user can choose to do it or not.
Post by Alexandre Derumier
Post by Thomas Lamprecht
If we actually do this I rather would go the way proposed in this thread
and convert qga to a format_string, then we could convert the QGA UI
qga: 1,fstrim_cloned_disks=1
sounds good to me.


Regards,

Alexandre

Stoiko Ivanov
2018-07-05 17:42:33 UTC
Signed-off-by: Stoiko Ivanov <***@proxmox.com>
---
PVE/QemuMigrate.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index ab2258d..ca09469 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -968,7 +968,7 @@ sub phase3_cleanup {
}

if ($self->{storage_migration} && $conf->{qga} && $self->{running}) {
- my $cmd = [@{$self->{rem_ssh}}, 'qm', 'agent','fstrim'];
+ my $cmd = [@{$self->{rem_ssh}}, 'qm', 'agent', $vmid, 'fstrim'];
eval{ PVE::Tools::run_command($cmd, outfunc => sub {}, errfunc => sub {}) };
}
}
--
2.11.0
Stoiko Ivanov
2018-07-05 17:42:35 UTC
as conditional when trimming after move_disk, migrate or clone

Signed-off-by: Stoiko Ivanov <***@proxmox.com>
---
PVE/API2/Qemu.pm | 4 ++--
PVE/QemuMigrate.pm | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 6dc2489..d157ba6 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -2773,7 +2773,7 @@ __PACKAGE__->register_method({

PVE::QemuConfig->write_config($newid, $newconf);

- if ($running && $conf->{agent} && PVE::QemuServer::qga_check_running($vmid)) {
+ if ($running && PVE::QemuServer::parse_guest_agent($conf)->{fstrim_cloned_disks} && PVE::QemuServer::qga_check_running($vmid)) {
eval { PVE::QemuServer::vm_mon_cmd($vmid, "guest-fstrim"); };
}

@@ -2954,7 +2954,7 @@ __PACKAGE__->register_method({

PVE::QemuConfig->write_config($vmid, $conf);

- if ($running && $conf->{agent} && PVE::QemuServer::qga_check_running($vmid)) {
+ if ($running && PVE::QemuServer::parse_guest_agent($conf)->{fstrim_cloned_disks} && PVE::QemuServer::qga_check_running($vmid)) {
eval { PVE::QemuServer::vm_mon_cmd($vmid, "guest-fstrim"); };
}

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index ca09469..39546c9 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -967,7 +967,7 @@ sub phase3_cleanup {
}
}

- if ($self->{storage_migration} && $conf->{qga} && $self->{running}) {
+ if ($self->{storage_migration} && PVE::QemuServer::parse_guest_agent($conf)->{fstrim_cloned_disks} && $self->{running}) {
my $cmd = [@{$self->{rem_ssh}}, 'qm', 'agent', $vmid, 'fstrim'];
eval{ PVE::Tools::run_command($cmd, outfunc => sub {}, errfunc => sub {}) };
}
--
2.11.0
Stoiko Ivanov
2018-07-05 17:42:34 UTC
Signed-off-by: Stoiko Ivanov <***@proxmox.com>
---
PVE/API2/Qemu.pm | 3 +--
PVE/QemuConfig.pm | 2 +-
PVE/QemuServer.pm | 38 +++++++++++++++++++++++++++++++++-----
3 files changed, 35 insertions(+), 8 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index db28ed0..6dc2489 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1875,8 +1875,7 @@ __PACKAGE__->register_method({
$status->{ha} = PVE::HA::Config::get_service_status("vm:$param->{vmid}");

$status->{spice} = 1 if PVE::QemuServer::vga_conf_has_spice($conf->{vga});
-
- $status->{agent} = 1 if $conf->{agent};
+ $status->{agent} = 1 if (PVE::QemuServer::parse_guest_agent($conf)->{enabled});

return $status;
}});
diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index 9a29b53..b24773c 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -161,7 +161,7 @@ sub __snapshot_check_freeze_needed {

my $running = $class->__snapshot_check_running($vmid);
if (!$save_vmstate) {
- return ($running, $running && $config->{agent} && PVE::QemuServer::qga_check_running($vmid));
+ return ($running, $running && PVE::QemuServer::parse_guest_agent($config)->{enabled} && PVE::QemuServer::qga_check_running($vmid));
} else {
return ($running, 0);
}
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 5829e4c..8a8fdb6 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -200,6 +200,21 @@ my $watchdog_fmt = {
};
PVE::JSONSchema::register_format('pve-qm-watchdog', $watchdog_fmt);

+my $agent_fmt = {
+ enabled => {
+ description => "Enable/disable Qemu GuestAgent.",
+ type => 'boolean',
+ default => 0,
+ default_key => 1,
+ },
+ fstrim_cloned_disks => {
+ description => "Run fstrim after cloning/moving a disk.",
+ type => 'boolean',
+ optional => 1,
+ default => 0
+ },
+};
+
my $confdesc = {
onboot => {
optional => 1,
@@ -380,9 +395,9 @@ EODESC
},
agent => {
optional => 1,
- type => 'boolean',
- description => "Enable/disable Qemu GuestAgent.",
- default => 0,
+ description => "Enable/disable Qemu GuestAgent and its properties.",
+ type => 'string',
+ format => $agent_fmt,
},
kvm => {
optional => 1,
@@ -2211,6 +2226,19 @@ sub parse_watchdog {
return $res;
}

+sub parse_guest_agent {
+ my ($value) = @_;
+
+ return {} if !defined($value->{agent});
+
+ my $res = eval { PVE::JSONSchema::parse_property_string($agent_fmt, $value->{agent}) };
+ warn $@ if $@;
+
+ # if the agent is disabled ignore the other potentially set properties
+ return {} if !$res->{enabled};
+ return $res;
+}
+
PVE::JSONSchema::register_format('pve-qm-usb-device', \&verify_usb_device);
sub verify_usb_device {
my ($value, $noerr) = @_;
@@ -3395,7 +3423,7 @@ sub config_to_command {
#push @$cmd, '-soundhw', 'es1370';
#push @$cmd, '-soundhw', $soundhw if $soundhw;

- if($conf->{agent}) {
+ if (parse_guest_agent($conf)->{enabled}) {
my $qgasocket = qmp_socket($vmid, 1);
my $pciaddr = print_pci_addr("qga0", $bridges);
push @$devices, '-chardev', "socket,path=$qgasocket,server,nowait,id=qga0";
@@ -5095,7 +5123,7 @@ sub vm_stop {

eval {
if ($shutdown) {
- if (defined($conf) && $conf->{agent}) {
+ if (defined($conf) && parse_guest_agent($conf)->{enabled}) {
vm_qmp_command($vmid, { execute => "guest-shutdown" }, $nocheck);
} else {
vm_qmp_command($vmid, { execute => "system_powerdown" }, $nocheck);
--
2.11.0
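Taken together with the fstrim_cloned_disks patch, the trimming then becomes opt-in per VM; the vmid.conf entry would presumably look like:

agent: 1,fstrim_cloned_disks=1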
Stoiko Ivanov
2018-07-05 17:42:36 UTC
Change to Qemu Guest Agent now being a property_string, and including
fstrim_cloned_disks.

Signed-off-by: Stoiko Ivanov <***@proxmox.com>
---
www/manager6/Makefile | 1 +
www/manager6/Utils.js | 10 +++++
www/manager6/form/AgentFeatureSelector.js | 65 +++++++++++++++++++++++++++++++
www/manager6/qemu/Options.js | 10 ++---
4 files changed, 80 insertions(+), 6 deletions(-)
create mode 100644 www/manager6/form/AgentFeatureSelector.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 3cc6990f..28d21fa7 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -49,6 +49,7 @@ JSSRC= \
form/SnapshotSelector.js \
form/ContentTypeSelector.js \
form/HotplugFeatureSelector.js \
+ form/AgentFeatureSelector.js \
form/iScsiProviderSelector.js \
form/DayOfWeekSelector.js \
form/BackupModeSelector.js \
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index ad5a0a61..58f92788 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -175,6 +175,16 @@ Ext.define('PVE.Utils', { utilities: {
return fa.join(', ');
},

+ render_qga_features: function(value) {
+ if (!value) {
+ return Proxmox.Utils.defaultText + ' (' + gettext('Disabled') + ')';
+ } else if (value === '0') {
+ return gettext('Disabled');
+ } else {
+ return value;
+ }
+ },
+
render_qemu_bios: function(value) {
if (!value) {
return Proxmox.Utils.defaultText + ' (SeaBIOS)';
diff --git a/www/manager6/form/AgentFeatureSelector.js b/www/manager6/form/AgentFeatureSelector.js
new file mode 100644
index 00000000..cd74721b
--- /dev/null
+++ b/www/manager6/form/AgentFeatureSelector.js
@@ -0,0 +1,65 @@
+Ext.define('PVE.form.AgentFeatureSelector', {
+ extend: 'Ext.form.CheckboxGroup',
+ alias: 'widget.pveAgentFeatureSelector',
+
+ columns: 1,
+ vertical: true,
+ items: [
+ { boxLabel: gettext('Enable Qemu Agent'), name: 'agent', inputValue: 'enabled', submitValue: false },
+ { boxLabel: gettext('Run guest-trim after clone disk'), name: 'agent', inputValue: 'fstrim_cloned_disks', submitValue: false }
+ ],
+
+ setValue: function(value) {
+ var me = this;
+
+ var newVal = [];
+
+ var errors = false;
+ Ext.Array.forEach(value.split(','), function(val){
+ if (val === '1') {
+ newVal.push('enabled');
+ return;
+ }
+
+ var match_res = val.match(/^([a-z_]+)=(\S+)$/);
+ if (!match_res) {
+ newVal = [];
+ return false;
+ }
+
+ var k = match_res[1];
+ if (match_res[2] == 1) {
+ newVal.push(k);
+ }
+ });
+
+ if (errors) {
+ newVal = [];
+ }
+ me.callParent([{ agent: newVal }]);
+ },
+
+ // overide framework function to
+ // assemble the qemu guest agent value
+ getSubmitData: function() {
+ var me = this,
+ boxes = me.getBoxes(),
+ data = [];
+
+ Ext.Array.forEach(boxes, function(box){
+ if (box.getValue() && box.inputValue == 'enabled') {
+ data.push('1');
+ } else if (box.getValue()) {
+ data.push(box.inputValue+'=1');
+ }
+ });
+ /* because agent is an array */
+ /*jslint confusion: true*/
+ if (data.length === 0) {
+ return { 'agent': '0' };
+ } else {
+ return { 'agent': data.join(',') };
+ }
+ }
+
+});
diff --git a/www/manager6/qemu/Options.js b/www/manager6/qemu/Options.js
index 6f0b2511..16249fb4 100644
--- a/www/manager6/qemu/Options.js
+++ b/www/manager6/qemu/Options.js
@@ -266,17 +266,15 @@ Ext.define('PVE.qemu.Options', {
agent: {
header: gettext('Qemu Agent'),
defaultValue: false,
- renderer: Proxmox.Utils.format_boolean,
+ renderer: PVE.Utils.render_qga_features,
editor: caps.vms['VM.Config.Options'] ? {
xtype: 'proxmoxWindowEdit',
subject: gettext('Qemu Agent'),
items: {
- xtype: 'proxmoxcheckbox',
+ xtype: 'pveAgentFeatureSelector',
name: 'agent',
- uncheckedValue: 0,
- defaultValue: 0,
- deleteDefaultValue: true,
- fieldLabel: gettext('Enabled')
+ value: '',
+ allowBlank: true
}
} : undefined
},
--
2.11.0
Thomas Lamprecht
2018-07-06 07:30:16 UTC
Post by Stoiko Ivanov
Change to Qemu Guest Agent now being a property_string, and including
fstrim_cloned_disks.
---
www/manager6/Makefile | 1 +
www/manager6/Utils.js | 10 +++++
www/manager6/form/AgentFeatureSelector.js | 65 +++++++++++++++++++++++++++++++
www/manager6/qemu/Options.js | 10 ++---
4 files changed, 80 insertions(+), 6 deletions(-)
create mode 100644 www/manager6/form/AgentFeatureSelector.js
diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 3cc6990f..28d21fa7 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -49,6 +49,7 @@ JSSRC= \
form/SnapshotSelector.js \
form/ContentTypeSelector.js \
form/HotplugFeatureSelector.js \
+ form/AgentFeatureSelector.js \
form/iScsiProviderSelector.js \
form/DayOfWeekSelector.js \
form/BackupModeSelector.js \
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index ad5a0a61..58f92788 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -175,6 +175,16 @@ Ext.define('PVE.Utils', { utilities: {
return fa.join(', ');
},
+ render_qga_features: function(value) {
+ if (!value) {
+ return Proxmox.Utils.defaultText + ' (' + gettext('Disabled') + ')';
+ } else if (value === '0') {
+ return gettext('Disabled');
+ } else {
+ return value;
+ }
+ },
+
render_qemu_bios: function(value) {
if (!value) {
return Proxmox.Utils.defaultText + ' (SeaBIOS)';
diff --git a/www/manager6/form/AgentFeatureSelector.js b/www/manager6/form/AgentFeatureSelector.js
new file mode 100644
index 00000000..cd74721b
--- /dev/null
+++ b/www/manager6/form/AgentFeatureSelector.js
@@ -0,0 +1,65 @@
+Ext.define('PVE.form.AgentFeatureSelector', {
+ extend: 'Ext.form.CheckboxGroup',
+ alias: 'widget.pveAgentFeatureSelector',
+
+ columns: 1,
+ vertical: true,
+ items: [
+ { boxLabel: gettext('Enable Qemu Agent'), name: 'agent', inputValue: 'enabled', submitValue: false },
+ { boxLabel: gettext('Run guest-trim after clone disk'), name: 'agent', inputValue: 'fstrim_cloned_disks', submitValue: false }
Please, no, don't! Sorry, but this looks horrible...
And it makes review hard.

I'd normally say that there's not a single place in our JS codebase doing
this, but I just opened the Hotplug Option Editor (oh dear...) and now know where
you got that idea... I refactored it using the ExtJS 'defaults' feature to group common
configs together and formatted it a bit more sanely.

Let's take a look at yours sanely formatted:
items: [
{
boxLabel: gettext('Enable Qemu Agent'),

* We try to avoid "Enable" in boolean checkboxes,
just more text (to read and translate) but no value gained.

name: 'agent',
inputValue: 'enabled',
submitValue: false

name and agent could be moved into a defaults section.

},
{
boxLabel: gettext('Run guest-trim after clone disk'),
name: 'agent',
inputValue: 'fstrim_cloned_disks',
submitValue: false
}
],


But how about just building upon our InputPanel and doing something like:

Ext.define('PVE.form.AgentFeatureSelector', {
extend: 'Proxmox.panel.InputPanel',
alias: 'widget.pveAgentFeatureSelector',

column1: [
{
xtype: 'proxmoxcheckbox',
boxLabel: gettext('Qemu Agent'),
name: 'enabled'
},
{
xtype: 'proxmoxcheckbox',
boxLabel: gettext('Run guest-trim after clone disk'),
name: 'fstrim_cloned_disks'
}
],

onGetValues: function(values) {
var me = this;

var agentstr = '';
Ext.Object.each(values, function(key, value) {
if (!Ext.isDefined(value)) {
return; // continue
}
if (agentstr === '') {
agentstr += key + '=' + value;
} else {
agentstr += ',' + key + '=' + value;
}
});

return { agent: agentstr };
},

setValues: function(values) {
var agent = values.agent || '';

var res = {};

if (agent) {
Ext.Array.each(agent.split(','), function(p) {
var kv = p.split('=');
res[kv[0]] = kv[1];
});
}

this.callParent([res]);
}
});

This is a quick hack, but testing seems good.
What I additionally would like is to disable all following checkboxes
if the agent enable one is disabled (you cannot submit anyway).

Further, the split-and-assemble format-string code could be put in a general helper;
this isn't the first time I see this...
Post by Stoiko Ivanov
+ ],
+
+ setValue: function(value) {
+ var me = this;
+
+ var newVal = [];
+
+ var errors = false;
+ Ext.Array.forEach(value.split(','), function(val){
+ if (val === '1') {
+ newVal.push('enabled');
+ return;
+ }
+
+ var match_res = val.match(/^([a-z_]+)=(\S+)$/);
+ if (!match_res) {
+ newVal = [];
+ return false;
+ }
+
+ var k = match_res[1];
+ if (match_res[2] == 1) {
+ newVal.push(k);
+ }
+ });
+
+ if (errors) {
+ newVal = [];
+ }
+ me.callParent([{ agent: newVal }]);
+ },
+
+ // overide framework function to
+ // assemble the qemu guest agent value
this got me wondering if you copied it from somewhere, as I recognized Dominik's
comment style (early newlines, not using the 80cc) ^^
Post by Stoiko Ivanov
+ getSubmitData: function() {
+ var me = this,
+ boxes = me.getBoxes(),
+ data = [];
+
+ Ext.Array.forEach(boxes, function(box){
+ if (box.getValue() && box.inputValue == 'enabled') {
+ data.push('1');
+ } else if (box.getValue()) {
+ data.push(box.inputValue+'=1');
+ }
+ });
+ /* because agent is an array */
+ /*jslint confusion: true*/
+ if (data.length === 0) {
+ return { 'agent': '0' };
+ } else {
+ return { 'agent': data.join(',') };
+ }
+ }
+
+});
diff --git a/www/manager6/qemu/Options.js b/www/manager6/qemu/Options.js
index 6f0b2511..16249fb4 100644
--- a/www/manager6/qemu/Options.js
+++ b/www/manager6/qemu/Options.js
@@ -266,17 +266,15 @@ Ext.define('PVE.qemu.Options', {
agent: {
header: gettext('Qemu Agent'),
defaultValue: false,
- renderer: Proxmox.Utils.format_boolean,
+ renderer: PVE.Utils.render_qga_features,
editor: caps.vms['VM.Config.Options'] ? {
xtype: 'proxmoxWindowEdit',
subject: gettext('Qemu Agent'),
items: {
- xtype: 'proxmoxcheckbox',
+ xtype: 'pveAgentFeatureSelector',
name: 'agent',
- uncheckedValue: 0,
- defaultValue: 0,
- deleteDefaultValue: true,
- fieldLabel: gettext('Enabled')
+ value: '',
+ allowBlank: true
}
} : undefined
},
Alexandre DERUMIER
2018-07-06 04:01:32 UTC
Hi,

Thanks for the patches !
While creating the patches I got quite inconsistent behaviour w.r.t. the
effectiveness of the fstrim (I used lvmthin as storage) - sometimes (quite
often) the fstrim didn't free any space, or freed just 1-2% (remaining at 98%-100%
usage with the image being only 45% full). Maybe it's more related to lvmthin
(and the VM and its virtualized block devices not noticing that they got blown
up) and works fine with ceph?
I'll do tests on lvm, zfs && rbd today.
Would be grateful for some tests in other environments.
Also, I'm not sure whether the trimming is useful in the clone case:
since the new VM is not started, the fstrim is run against the original VM, which
at least in my tests does not bloat?
Yes, indeed. I think it's not needed.

----- Mail original -----
De: "Stoiko Ivanov" <***@proxmox.com>
À: "pve-devel" <pve-***@pve.proxmox.com>
Envoyé: Jeudi 5 Juillet 2018 19:42:32
Objet: [pve-devel] [PATCH] Make Qemu Guest Agent a property_string

Fix #1242.

As suggested in the ticket, these changes make the trimming after a move_disk,
clone or migrate depend on the new fstrim_cloned_disks property.

The patchset is based on Alexandre's Patch sent on 28.05.2018 (Thanks!).

One minor fixup is included (the vmid was missing in the invocation of
qm agent $vmid fstrim after migration).

While creating the patches I got quite inconsistent behaviour w.r.t. the
effectiveness of the fstrim (I used lvmthin as storage) - sometimes (quite
often) the fstrim didn't free any space, or freed just 1-2% (remaining at 98%-100%
usage with the image being only 45% full). Maybe it's more related to lvmthin
(and the VM and its virtualized block devices not noticing that they got blown
up) and works fine with ceph?

Also, I'm not sure whether the trimming is useful in the clone case:
since the new VM is not started, the fstrim is run against the original VM, which
at least in my tests does not bloat?

Would be grateful for some tests in other environments.

Since the GUI part is my first try in JavaScript, I would be grateful for
pointers on where/what to improve (it's based on the way we do the hotplug
settings).



Stoiko Ivanov (3):
Add missing $vmid to fstrim on migrate
Make agent a property string, add fstrim_cloned_disks
use fstrim_cloned_disks property

PVE/API2/Qemu.pm | 7 +++----
PVE/QemuConfig.pm | 2 +-
PVE/QemuMigrate.pm | 4 ++--
PVE/QemuServer.pm | 38 +++++++++++++++++++++++++++++++++-----
4 files changed, 39 insertions(+), 12 deletions(-)

Stoiko Ivanov (1):
ui/qemu: Extend Qemu Guest agent

www/manager6/Makefile | 1 +
www/manager6/Utils.js | 10 +++++
www/manager6/form/AgentFeatureSelector.js | 65 +++++++++++++++++++++++++++++++
www/manager6/qemu/Options.js | 10 ++---
4 files changed, 80 insertions(+), 6 deletions(-)
create mode 100644 www/manager6/form/AgentFeatureSelector.js
--
2.11.0

